2025-10-08 14:53:38.244361 | Job console starting
2025-10-08 14:53:38.262393 | Updating git repos
2025-10-08 14:53:38.378923 | Cloning repos into workspace
2025-10-08 14:53:38.594988 | Restoring repo states
2025-10-08 14:53:38.616090 | Merging changes
2025-10-08 14:53:38.616107 | Checking out repos
2025-10-08 14:53:38.969057 | Preparing playbooks
2025-10-08 14:53:39.570429 | Running Ansible setup
2025-10-08 14:53:43.636867 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-10-08 14:53:44.372062 |
2025-10-08 14:53:44.372242 | PLAY [Base pre]
2025-10-08 14:53:44.389063 |
2025-10-08 14:53:44.389201 | TASK [Setup log path fact]
2025-10-08 14:53:44.419264 | orchestrator | ok
2025-10-08 14:53:44.436496 |
2025-10-08 14:53:44.436674 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-10-08 14:53:44.475914 | orchestrator | ok
2025-10-08 14:53:44.487614 |
2025-10-08 14:53:44.487720 | TASK [emit-job-header : Print job information]
2025-10-08 14:53:44.531140 | # Job Information
2025-10-08 14:53:44.531398 | Ansible Version: 2.16.14
2025-10-08 14:53:44.531450 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-10-08 14:53:44.531500 | Pipeline: post
2025-10-08 14:53:44.531535 | Executor: 521e9411259a
2025-10-08 14:53:44.531567 | Triggered by: https://github.com/osism/testbed/commit/3c9f43260b0dd1bb6a8289edb9cd0ffddad50887
2025-10-08 14:53:44.531598 | Event ID: 8e0c2044-a456-11f0-8a27-c128f96c82c5
2025-10-08 14:53:44.539775 |
2025-10-08 14:53:44.539887 | LOOP [emit-job-header : Print node information]
2025-10-08 14:53:44.659622 | orchestrator | ok:
2025-10-08 14:53:44.659981 | orchestrator | # Node Information
2025-10-08 14:53:44.660035 | orchestrator | Inventory Hostname: orchestrator
2025-10-08 14:53:44.660060 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-10-08 14:53:44.660082 | orchestrator | Username: zuul-testbed05
2025-10-08 14:53:44.660103 | orchestrator | Distro: Debian 12.12
2025-10-08 14:53:44.660127 | orchestrator | Provider: static-testbed
2025-10-08 14:53:44.660148 | orchestrator | Region:
2025-10-08 14:53:44.660199 | orchestrator | Label: testbed-orchestrator
2025-10-08 14:53:44.660221 | orchestrator | Product Name: OpenStack Nova
2025-10-08 14:53:44.660241 | orchestrator | Interface IP: 81.163.193.140
2025-10-08 14:53:44.692518 |
2025-10-08 14:53:44.692661 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-10-08 14:53:45.174307 | orchestrator -> localhost | changed
2025-10-08 14:53:45.183394 |
2025-10-08 14:53:45.183515 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-10-08 14:53:46.210374 | orchestrator -> localhost | changed
2025-10-08 14:53:46.224514 |
2025-10-08 14:53:46.224635 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-10-08 14:53:46.477808 | orchestrator -> localhost | ok
2025-10-08 14:53:46.489458 |
2025-10-08 14:53:46.489612 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-10-08 14:53:46.519964 | orchestrator | ok
2025-10-08 14:53:46.536278 | orchestrator | included: /var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-10-08 14:53:46.544435 |
2025-10-08 14:53:46.544534 | TASK [add-build-sshkey : Create Temp SSH key]
2025-10-08 14:53:47.356838 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-10-08 14:53:47.357143 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/508b257440874fc3bc38a3dc0806d28d_id_rsa
2025-10-08 14:53:47.357249 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/508b257440874fc3bc38a3dc0806d28d_id_rsa.pub
2025-10-08 14:53:47.357288 | orchestrator -> localhost | The key fingerprint is:
2025-10-08 14:53:47.357325 | orchestrator -> localhost | SHA256:92f2jKslSiMDs7a1FCCm0bhDNc918qx8fS9paNrQlEA zuul-build-sshkey
2025-10-08 14:53:47.357359 | orchestrator -> localhost | The key's randomart image is:
2025-10-08 14:53:47.357406 | orchestrator -> localhost | +---[RSA 3072]----+
2025-10-08 14:53:47.357437 | orchestrator -> localhost | | o o E |
2025-10-08 14:53:47.357469 | orchestrator -> localhost | | + + . * |
2025-10-08 14:53:47.357498 | orchestrator -> localhost | | + + + + |
2025-10-08 14:53:47.357527 | orchestrator -> localhost | | . = . o . o . |
2025-10-08 14:53:47.357557 | orchestrator -> localhost | | + o S o + . |
2025-10-08 14:53:47.357592 | orchestrator -> localhost | | . + + + o o |
2025-10-08 14:53:47.357624 | orchestrator -> localhost | | o = + * O .|
2025-10-08 14:53:47.357654 | orchestrator -> localhost | | . + = B B = |
2025-10-08 14:53:47.357685 | orchestrator -> localhost | | . . o o.o.o|
2025-10-08 14:53:47.357715 | orchestrator -> localhost | +----[SHA256]-----+
2025-10-08 14:53:47.357790 | orchestrator -> localhost | ok: Runtime: 0:00:00.300978
2025-10-08 14:53:47.368127 |
2025-10-08 14:53:47.368286 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-10-08 14:53:47.415275 | orchestrator | ok
2025-10-08 14:53:47.430070 | orchestrator | included: /var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-10-08 14:53:47.440200 |
2025-10-08 14:53:47.440299 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-10-08 14:53:47.474120 | orchestrator | skipping: Conditional result was False
2025-10-08 14:53:47.486496 |
2025-10-08 14:53:47.486630 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-10-08 14:53:48.082619 | orchestrator | changed
2025-10-08 14:53:48.094389 |
2025-10-08 14:53:48.094529 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-10-08 14:53:48.372811 | orchestrator | ok
2025-10-08 14:53:48.381959 |
2025-10-08 14:53:48.382080 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-10-08 14:53:49.118071 | orchestrator | ok
2025-10-08 14:53:49.126561 |
2025-10-08 14:53:49.126696 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-10-08 14:53:49.508741 | orchestrator | ok
2025-10-08 14:53:49.517708 |
2025-10-08 14:53:49.517831 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-10-08 14:53:49.552430 | orchestrator | skipping: Conditional result was False
2025-10-08 14:53:49.563048 |
2025-10-08 14:53:49.563225 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-10-08 14:53:50.011976 | orchestrator -> localhost | changed
2025-10-08 14:53:50.026597 |
2025-10-08 14:53:50.026718 | TASK [add-build-sshkey : Add back temp key]
2025-10-08 14:53:50.346201 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/508b257440874fc3bc38a3dc0806d28d_id_rsa (zuul-build-sshkey)
2025-10-08 14:53:50.346941 | orchestrator -> localhost | ok: Runtime: 0:00:00.017984
2025-10-08 14:53:50.361383 |
2025-10-08 14:53:50.361552 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-10-08 14:53:50.789177 | orchestrator | ok
2025-10-08 14:53:50.798454 |
2025-10-08 14:53:50.798587 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-10-08 14:53:50.832892 | orchestrator | skipping: Conditional result was False
2025-10-08 14:53:50.890364 |
2025-10-08 14:53:50.890492 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-10-08 14:53:51.283719 | orchestrator | ok
2025-10-08 14:53:51.298107 |
2025-10-08 14:53:51.298264 | TASK [validate-host : Define zuul_info_dir fact]
2025-10-08 14:53:51.343700 | orchestrator | ok
2025-10-08 14:53:51.353721 |
2025-10-08 14:53:51.353840 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-10-08 14:53:51.631390 | orchestrator -> localhost | ok
2025-10-08 14:53:51.643734 |
2025-10-08 14:53:51.643891 | TASK [validate-host : Collect information about the host]
2025-10-08 14:53:53.821471 | orchestrator | ok
2025-10-08 14:53:53.839352 |
2025-10-08 14:53:53.839470 | TASK [validate-host : Sanitize hostname]
2025-10-08 14:53:53.905231 | orchestrator | ok
2025-10-08 14:53:53.914014 |
2025-10-08 14:53:53.914238 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-10-08 14:53:54.475531 | orchestrator -> localhost | changed
2025-10-08 14:53:54.487129 |
2025-10-08 14:53:54.487297 | TASK [validate-host : Collect information about zuul worker]
2025-10-08 14:53:54.917969 | orchestrator | ok
2025-10-08 14:53:54.927580 |
2025-10-08 14:53:54.927722 | TASK [validate-host : Write out all zuul information for each host]
2025-10-08 14:53:55.471105 | orchestrator -> localhost | changed
2025-10-08 14:53:55.482083 |
2025-10-08 14:53:55.482214 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-10-08 14:53:55.761564 | orchestrator | ok
2025-10-08 14:53:55.771381 |
2025-10-08 14:53:55.771514 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-10-08 14:54:37.914281 | orchestrator | changed:
2025-10-08 14:54:37.914505 | orchestrator | .d..t...... src/
2025-10-08 14:54:37.914541 | orchestrator | .d..t...... src/github.com/
2025-10-08 14:54:37.914566 | orchestrator | .d..t...... src/github.com/osism/
2025-10-08 14:54:37.914588 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-10-08 14:54:37.914610 | orchestrator | RedHat.yml
2025-10-08 14:54:37.928087 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-10-08 14:54:37.928104 | orchestrator | RedHat.yml
2025-10-08 14:54:37.928156 | orchestrator | = 1.53.0"...
2025-10-08 14:54:51.861037 | orchestrator | 14:54:51.860 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-10-08 14:54:52.011626 | orchestrator | 14:54:52.011 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-10-08 14:54:52.447078 | orchestrator | 14:54:52.446 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-10-08 14:54:52.854352 | orchestrator | 14:54:52.854 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-10-08 14:54:53.715247 | orchestrator | 14:54:53.715 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-10-08 14:54:54.086968 | orchestrator | 14:54:54.086 STDOUT terraform: - Installing hashicorp/local v2.5.3...
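The "Create Temp SSH key" task earlier in this log is a plain `ssh-keygen` invocation, as its output (fingerprint, randomart, RSA 3072) shows. A minimal sketch of that step, using illustrative stand-ins for the build UUID and workspace path (the real values come from Zuul's build context):

```shell
set -eu

# Stand-ins: in the real job these come from the Zuul build UUID and work dir.
BUILD_UUID="example-build-uuid"
WORK_DIR="$(mktemp -d)"
KEY="${WORK_DIR}/${BUILD_UUID}_id_rsa"

# 3072-bit RSA key with an empty passphrase and the comment seen in the log.
ssh-keygen -t rsa -b 3072 -N "" -C zuul-build-sshkey -f "$KEY" -q

# Print the SHA256 fingerprint, matching the "key fingerprint" line above.
ssh-keygen -l -E sha256 -f "${KEY}.pub"
```

The role then installs the public key into `authorized_keys` on each node and loads the private key into the executor's SSH agent, which is what the subsequent `Enable access via build key` and `Add back temp key` tasks record.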
2025-10-08 14:54:54.875451 | orchestrator | 14:54:54.875 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-10-08 14:54:54.875522 | orchestrator | 14:54:54.875 STDOUT terraform: Providers are signed by their developers.
2025-10-08 14:54:54.875566 | orchestrator | 14:54:54.875 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-10-08 14:54:54.875575 | orchestrator | 14:54:54.875 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-10-08 14:54:54.875582 | orchestrator | 14:54:54.875 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-10-08 14:54:54.875600 | orchestrator | 14:54:54.875 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-10-08 14:54:54.875638 | orchestrator | 14:54:54.875 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-10-08 14:54:54.875649 | orchestrator | 14:54:54.875 STDOUT terraform: you run "tofu init" in the future.
2025-10-08 14:54:54.875704 | orchestrator | 14:54:54.875 STDOUT terraform: OpenTofu has been successfully initialized!
2025-10-08 14:54:54.875832 | orchestrator | 14:54:54.875 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-10-08 14:54:54.875892 | orchestrator | 14:54:54.875 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-10-08 14:54:54.875911 | orchestrator | 14:54:54.875 STDOUT terraform: should now work.
2025-10-08 14:54:54.875918 | orchestrator | 14:54:54.875 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-10-08 14:54:54.875924 | orchestrator | 14:54:54.875 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-10-08 14:54:54.875933 | orchestrator | 14:54:54.875 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-10-08 14:54:55.158261 | orchestrator | 14:54:55.158 STDOUT terraform: Created and switched to workspace "ci"!
2025-10-08 14:54:55.158316 | orchestrator | 14:54:55.158 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-10-08 14:54:55.158333 | orchestrator | 14:54:55.158 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-10-08 14:54:55.158341 | orchestrator | 14:54:55.158 STDOUT terraform: for this configuration.
2025-10-08 14:54:55.361385 | orchestrator | 14:54:55.361 STDOUT terraform: ci.auto.tfvars
2025-10-08 14:54:55.368447 | orchestrator | 14:54:55.368 STDOUT terraform: default_custom.tf
2025-10-08 14:54:56.400023 | orchestrator | 14:54:56.399 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-10-08 14:54:56.937045 | orchestrator | 14:54:56.936 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-10-08 14:54:57.205615 | orchestrator | 14:54:57.205 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-10-08 14:54:57.205686 | orchestrator | 14:54:57.205 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-10-08 14:54:57.205694 | orchestrator | 14:54:57.205 STDOUT terraform:   + create
2025-10-08 14:54:57.205837 | orchestrator | 14:54:57.205 STDOUT terraform:  <= read (data resources)
2025-10-08 14:54:57.205897 | orchestrator | 14:54:57.205 STDOUT terraform: OpenTofu will perform the following actions:
2025-10-08 14:54:57.206030 | orchestrator | 14:54:57.205 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-10-08 14:54:57.206193 | orchestrator | 14:54:57.205 STDOUT terraform:   # (config refers to values not yet known)
2025-10-08 14:54:57.206243 | orchestrator | 14:54:57.205 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-10-08 14:54:57.206323 | orchestrator | 14:54:57.205 STDOUT terraform:   + checksum = (known after apply)
2025-10-08 14:54:57.206412 | orchestrator | 14:54:57.206 STDOUT terraform:   + created_at = (known after apply)
2025-10-08 14:54:57.206488 | orchestrator | 14:54:57.206 STDOUT terraform:   + file = (known after apply)
2025-10-08 14:54:57.206513 | orchestrator | 14:54:57.206 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.206623 | orchestrator | 14:54:57.206 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.206632 | orchestrator | 14:54:57.206 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-10-08 14:54:57.206636 | orchestrator | 14:54:57.206 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-10-08 14:54:57.206640 | orchestrator | 14:54:57.206 STDOUT terraform:   + most_recent = true
2025-10-08 14:54:57.206644 | orchestrator | 14:54:57.206 STDOUT terraform:   + name = (known after apply)
2025-10-08 14:54:57.206648 | orchestrator | 14:54:57.206 STDOUT terraform:   + protected = (known after apply)
2025-10-08 14:54:57.206652 | orchestrator | 14:54:57.206 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.206656 | orchestrator | 14:54:57.206 STDOUT terraform:   + schema = (known after apply)
2025-10-08 14:54:57.206662 | orchestrator | 14:54:57.206 STDOUT terraform:   + size_bytes = (known after apply)
2025-10-08 14:54:57.206666 | orchestrator | 14:54:57.206 STDOUT terraform:   + tags = (known after apply)
2025-10-08 14:54:57.206670 | orchestrator | 14:54:57.206 STDOUT terraform:   + updated_at = (known after apply)
2025-10-08 14:54:57.206689 | orchestrator | 14:54:57.206 STDOUT terraform:   }
2025-10-08 14:54:57.206693 | orchestrator | 14:54:57.206 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-10-08 14:54:57.206698 | orchestrator | 14:54:57.206 STDOUT terraform:   # (config refers to values not yet known)
2025-10-08 14:54:57.206702 | orchestrator | 14:54:57.206 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-10-08 14:54:57.206705 | orchestrator | 14:54:57.206 STDOUT terraform:   + checksum = (known after apply)
2025-10-08 14:54:57.206709 | orchestrator | 14:54:57.206 STDOUT terraform:   + created_at = (known after apply)
2025-10-08 14:54:57.206713 | orchestrator | 14:54:57.206 STDOUT terraform:   + file = (known after apply)
2025-10-08 14:54:57.206721 | orchestrator | 14:54:57.206 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.206725 | orchestrator | 14:54:57.206 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.206730 | orchestrator | 14:54:57.206 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-10-08 14:54:57.206813 | orchestrator | 14:54:57.206 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-10-08 14:54:57.206884 | orchestrator | 14:54:57.206 STDOUT terraform:   + most_recent = true
2025-10-08 14:54:57.206892 | orchestrator | 14:54:57.206 STDOUT terraform:   + name = (known after apply)
2025-10-08 14:54:57.206896 | orchestrator | 14:54:57.206 STDOUT terraform:   + protected = (known after apply)
2025-10-08 14:54:57.206900 | orchestrator | 14:54:57.206 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.206903 | orchestrator | 14:54:57.206 STDOUT terraform:   + schema = (known after apply)
2025-10-08 14:54:57.206909 | orchestrator | 14:54:57.206 STDOUT terraform:   + size_bytes = (known after apply)
2025-10-08 14:54:57.206938 | orchestrator | 14:54:57.206 STDOUT terraform:   + tags = (known after apply)
2025-10-08 14:54:57.206944 | orchestrator | 14:54:57.206 STDOUT terraform:   + updated_at = (known after apply)
2025-10-08 14:54:57.206984 | orchestrator | 14:54:57.206 STDOUT terraform:   }
2025-10-08 14:54:57.207090 | orchestrator | 14:54:57.207 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-10-08 14:54:57.207666 | orchestrator | 14:54:57.207 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-10-08 14:54:57.207826 | orchestrator | 14:54:57.207 STDOUT terraform:   + content = (known after apply)
2025-10-08 14:54:57.208080 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-08 14:54:57.208369 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-08 14:54:57.208374 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-08 14:54:57.208527 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-08 14:54:57.208571 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-08 14:54:57.208678 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-08 14:54:57.208684 | orchestrator | 14:54:57.207 STDOUT terraform:   + directory_permission = "0777"
2025-10-08 14:54:57.208903 | orchestrator | 14:54:57.207 STDOUT terraform:   + file_permission = "0644"
2025-10-08 14:54:57.208908 | orchestrator | 14:54:57.207 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-10-08 14:54:57.209108 | orchestrator | 14:54:57.207 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.209245 | orchestrator | 14:54:57.207 STDOUT terraform:   }
2025-10-08 14:54:57.209251 | orchestrator | 14:54:57.207 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-10-08 14:54:57.209316 | orchestrator | 14:54:57.207 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-10-08 14:54:57.209320 | orchestrator | 14:54:57.207 STDOUT terraform:   + content = (known after apply)
2025-10-08 14:54:57.209652 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-08 14:54:57.209657 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-08 14:54:57.212902 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-08 14:54:57.212909 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-08 14:54:57.212916 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-08 14:54:57.212920 | orchestrator | 14:54:57.207 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-08 14:54:57.212924 | orchestrator | 14:54:57.207 STDOUT terraform:   + directory_permission = "0777"
2025-10-08 14:54:57.212927 | orchestrator | 14:54:57.207 STDOUT terraform:   + file_permission = "0644"
2025-10-08 14:54:57.212931 | orchestrator | 14:54:57.207 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-10-08 14:54:57.212935 | orchestrator | 14:54:57.207 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.212939 | orchestrator | 14:54:57.207 STDOUT terraform:   }
2025-10-08 14:54:57.212943 | orchestrator | 14:54:57.207 STDOUT terraform:   # local_file.inventory will be created
2025-10-08 14:54:57.212946 | orchestrator | 14:54:57.207 STDOUT terraform:   + resource "local_file" "inventory" {
2025-10-08 14:54:57.212950 | orchestrator | 14:54:57.207 STDOUT terraform:   + content = (known after apply)
2025-10-08 14:54:57.212954 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-08 14:54:57.212958 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-08 14:54:57.212962 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-08 14:54:57.212965 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-08 14:54:57.212969 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-08 14:54:57.212973 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-08 14:54:57.212977 | orchestrator | 14:54:57.208 STDOUT terraform:   + directory_permission = "0777"
2025-10-08 14:54:57.212980 | orchestrator | 14:54:57.208 STDOUT terraform:   + file_permission = "0644"
2025-10-08 14:54:57.212984 | orchestrator | 14:54:57.208 STDOUT terraform:   + filename = "inventory.ci"
2025-10-08 14:54:57.212998 | orchestrator | 14:54:57.208 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213002 | orchestrator | 14:54:57.208 STDOUT terraform:   }
2025-10-08 14:54:57.213006 | orchestrator | 14:54:57.208 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-10-08 14:54:57.213009 | orchestrator | 14:54:57.208 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-10-08 14:54:57.213013 | orchestrator | 14:54:57.208 STDOUT terraform:   + content = (sensitive value)
2025-10-08 14:54:57.213017 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-08 14:54:57.213021 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-08 14:54:57.213024 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-08 14:54:57.213028 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-08 14:54:57.213036 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-08 14:54:57.213040 | orchestrator | 14:54:57.208 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-08 14:54:57.213044 | orchestrator | 14:54:57.208 STDOUT terraform:   + directory_permission = "0700"
2025-10-08 14:54:57.213048 | orchestrator | 14:54:57.208 STDOUT terraform:   + file_permission = "0600"
2025-10-08 14:54:57.213052 | orchestrator | 14:54:57.208 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-10-08 14:54:57.213055 | orchestrator | 14:54:57.208 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213059 | orchestrator | 14:54:57.208 STDOUT terraform:   }
2025-10-08 14:54:57.213063 | orchestrator | 14:54:57.208 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-10-08 14:54:57.213067 | orchestrator | 14:54:57.208 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-10-08 14:54:57.213071 | orchestrator | 14:54:57.208 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213074 | orchestrator | 14:54:57.208 STDOUT terraform:   }
2025-10-08 14:54:57.213078 | orchestrator | 14:54:57.208 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-10-08 14:54:57.213082 | orchestrator | 14:54:57.208 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-10-08 14:54:57.213086 | orchestrator | 14:54:57.209 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213090 | orchestrator | 14:54:57.209 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213094 | orchestrator | 14:54:57.209 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213097 | orchestrator | 14:54:57.209 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213101 | orchestrator | 14:54:57.209 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213105 | orchestrator | 14:54:57.209 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-10-08 14:54:57.213109 | orchestrator | 14:54:57.209 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213112 | orchestrator | 14:54:57.209 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213119 | orchestrator | 14:54:57.209 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213123 | orchestrator | 14:54:57.209 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213127 | orchestrator | 14:54:57.209 STDOUT terraform:   }
2025-10-08 14:54:57.213133 | orchestrator | 14:54:57.209 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-10-08 14:54:57.213137 | orchestrator | 14:54:57.209 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213140 | orchestrator | 14:54:57.209 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213144 | orchestrator | 14:54:57.209 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213148 | orchestrator | 14:54:57.209 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213152 | orchestrator | 14:54:57.209 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213155 | orchestrator | 14:54:57.209 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213159 | orchestrator | 14:54:57.209 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-10-08 14:54:57.213163 | orchestrator | 14:54:57.209 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213166 | orchestrator | 14:54:57.209 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213170 | orchestrator | 14:54:57.209 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213174 | orchestrator | 14:54:57.209 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213178 | orchestrator | 14:54:57.209 STDOUT terraform:   }
2025-10-08 14:54:57.213181 | orchestrator | 14:54:57.209 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-10-08 14:54:57.213190 | orchestrator | 14:54:57.209 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213194 | orchestrator | 14:54:57.209 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213198 | orchestrator | 14:54:57.209 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213201 | orchestrator | 14:54:57.209 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213205 | orchestrator | 14:54:57.210 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213209 | orchestrator | 14:54:57.210 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213213 | orchestrator | 14:54:57.210 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-10-08 14:54:57.213217 | orchestrator | 14:54:57.210 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213220 | orchestrator | 14:54:57.210 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213224 | orchestrator | 14:54:57.210 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213228 | orchestrator | 14:54:57.210 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213231 | orchestrator | 14:54:57.210 STDOUT terraform:   }
2025-10-08 14:54:57.213235 | orchestrator | 14:54:57.210 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-10-08 14:54:57.213243 | orchestrator | 14:54:57.210 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213247 | orchestrator | 14:54:57.210 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213250 | orchestrator | 14:54:57.210 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213254 | orchestrator | 14:54:57.210 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213258 | orchestrator | 14:54:57.210 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213262 | orchestrator | 14:54:57.210 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213266 | orchestrator | 14:54:57.210 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-10-08 14:54:57.213269 | orchestrator | 14:54:57.210 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213273 | orchestrator | 14:54:57.210 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213277 | orchestrator | 14:54:57.210 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213281 | orchestrator | 14:54:57.210 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213284 | orchestrator | 14:54:57.210 STDOUT terraform:   }
2025-10-08 14:54:57.213288 | orchestrator | 14:54:57.210 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-10-08 14:54:57.213292 | orchestrator | 14:54:57.210 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213298 | orchestrator | 14:54:57.210 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213302 | orchestrator | 14:54:57.210 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213306 | orchestrator | 14:54:57.210 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213309 | orchestrator | 14:54:57.210 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213313 | orchestrator | 14:54:57.210 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213317 | orchestrator | 14:54:57.210 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-10-08 14:54:57.213321 | orchestrator | 14:54:57.211 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213324 | orchestrator | 14:54:57.211 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213328 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213332 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213339 | orchestrator | 14:54:57.211 STDOUT terraform:   }
2025-10-08 14:54:57.213343 | orchestrator | 14:54:57.211 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-10-08 14:54:57.213346 | orchestrator | 14:54:57.211 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213350 | orchestrator | 14:54:57.211 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213357 | orchestrator | 14:54:57.211 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213361 | orchestrator | 14:54:57.211 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213365 | orchestrator | 14:54:57.211 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213368 | orchestrator | 14:54:57.211 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213372 | orchestrator | 14:54:57.211 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-10-08 14:54:57.213376 | orchestrator | 14:54:57.211 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213380 | orchestrator | 14:54:57.211 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213383 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213387 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213391 | orchestrator | 14:54:57.211 STDOUT terraform:   }
2025-10-08 14:54:57.213395 | orchestrator | 14:54:57.211 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-10-08 14:54:57.213401 | orchestrator | 14:54:57.211 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-08 14:54:57.213405 | orchestrator | 14:54:57.211 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213408 | orchestrator | 14:54:57.211 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213412 | orchestrator | 14:54:57.211 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213416 | orchestrator | 14:54:57.211 STDOUT terraform:   + image_id = (known after apply)
2025-10-08 14:54:57.213420 | orchestrator | 14:54:57.211 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213423 | orchestrator | 14:54:57.211 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-10-08 14:54:57.213427 | orchestrator | 14:54:57.211 STDOUT terraform:   + region = (known after apply)
2025-10-08 14:54:57.213431 | orchestrator | 14:54:57.211 STDOUT terraform:   + size = 80
2025-10-08 14:54:57.213435 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-08 14:54:57.213438 | orchestrator | 14:54:57.211 STDOUT terraform:   + volume_type = "ssd"
2025-10-08 14:54:57.213442 | orchestrator | 14:54:57.211 STDOUT terraform:   }
2025-10-08 14:54:57.213446 | orchestrator | 14:54:57.212 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-10-08 14:54:57.213450 | orchestrator | 14:54:57.212 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-10-08 14:54:57.213454 | orchestrator | 14:54:57.212 STDOUT terraform:   + attachment = (known after apply)
2025-10-08 14:54:57.213457 | orchestrator | 14:54:57.212 STDOUT terraform:   + availability_zone = "nova"
2025-10-08 14:54:57.213461 | orchestrator | 14:54:57.212 STDOUT terraform:   + id = (known after apply)
2025-10-08 14:54:57.213465 | orchestrator | 14:54:57.212 STDOUT terraform:   + metadata = (known after apply)
2025-10-08 14:54:57.213468 | orchestrator | 14:54:57.212 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-10-08 14:54:57.213475 | orchestrator | 14:54:57.212 STDOUT terraform:   + region = (known
after apply) 2025-10-08 14:54:57.213479 | orchestrator | 14:54:57.212 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.213482 | orchestrator | 14:54:57.212 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.213491 | orchestrator | 14:54:57.212 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.213495 | orchestrator | 14:54:57.212 STDOUT terraform:  } 2025-10-08 14:54:57.213499 | orchestrator | 14:54:57.212 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-10-08 14:54:57.213503 | orchestrator | 14:54:57.212 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.213506 | orchestrator | 14:54:57.212 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.213510 | orchestrator | 14:54:57.212 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.213516 | orchestrator | 14:54:57.212 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.213520 | orchestrator | 14:54:57.212 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.213524 | orchestrator | 14:54:57.212 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-10-08 14:54:57.213528 | orchestrator | 14:54:57.212 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.213531 | orchestrator | 14:54:57.212 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.213535 | orchestrator | 14:54:57.212 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.213550 | orchestrator | 14:54:57.212 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.213554 | orchestrator | 14:54:57.212 STDOUT terraform:  } 2025-10-08 14:54:57.213557 | orchestrator | 14:54:57.212 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-10-08 14:54:57.213561 | orchestrator | 14:54:57.212 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.213565 | 
orchestrator | 14:54:57.212 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.213569 | orchestrator | 14:54:57.212 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.213573 | orchestrator | 14:54:57.212 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.213577 | orchestrator | 14:54:57.212 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.213580 | orchestrator | 14:54:57.213 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-10-08 14:54:57.213584 | orchestrator | 14:54:57.213 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.213588 | orchestrator | 14:54:57.213 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.213592 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.213595 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.213599 | orchestrator | 14:54:57.213 STDOUT terraform:  } 2025-10-08 14:54:57.213603 | orchestrator | 14:54:57.213 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-10-08 14:54:57.213610 | orchestrator | 14:54:57.213 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.213614 | orchestrator | 14:54:57.213 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.213617 | orchestrator | 14:54:57.213 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.213621 | orchestrator | 14:54:57.213 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.213625 | orchestrator | 14:54:57.213 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.213629 | orchestrator | 14:54:57.213 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-10-08 14:54:57.213632 | orchestrator | 14:54:57.213 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.213636 | orchestrator | 14:54:57.213 STDOUT terraform:  + size 
= 20 2025-10-08 14:54:57.213642 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.213646 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.213649 | orchestrator | 14:54:57.213 STDOUT terraform:  } 2025-10-08 14:54:57.213657 | orchestrator | 14:54:57.213 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-10-08 14:54:57.213967 | orchestrator | 14:54:57.213 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.213975 | orchestrator | 14:54:57.213 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.213979 | orchestrator | 14:54:57.213 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.213983 | orchestrator | 14:54:57.213 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.213986 | orchestrator | 14:54:57.213 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.213990 | orchestrator | 14:54:57.213 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-10-08 14:54:57.213994 | orchestrator | 14:54:57.213 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.213998 | orchestrator | 14:54:57.213 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.214002 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.214005 | orchestrator | 14:54:57.213 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.214009 | orchestrator | 14:54:57.213 STDOUT terraform:  } 2025-10-08 14:54:57.218253 | orchestrator | 14:54:57.218 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-10-08 14:54:57.218309 | orchestrator | 14:54:57.218 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.218321 | orchestrator | 14:54:57.218 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 
14:54:57.219510 | orchestrator | 14:54:57.218 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.219515 | orchestrator | 14:54:57.218 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.219519 | orchestrator | 14:54:57.218 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.219531 | orchestrator | 14:54:57.218 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-10-08 14:54:57.219535 | orchestrator | 14:54:57.218 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.219547 | orchestrator | 14:54:57.218 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.219551 | orchestrator | 14:54:57.218 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.219555 | orchestrator | 14:54:57.218 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.219559 | orchestrator | 14:54:57.218 STDOUT terraform:  } 2025-10-08 14:54:57.219563 | orchestrator | 14:54:57.218 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-10-08 14:54:57.219567 | orchestrator | 14:54:57.218 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.219571 | orchestrator | 14:54:57.218 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.219574 | orchestrator | 14:54:57.218 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.219578 | orchestrator | 14:54:57.218 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.219582 | orchestrator | 14:54:57.218 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.219586 | orchestrator | 14:54:57.218 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-10-08 14:54:57.219590 | orchestrator | 14:54:57.218 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.219593 | orchestrator | 14:54:57.218 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.219597 | orchestrator | 14:54:57.218 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-10-08 14:54:57.219601 | orchestrator | 14:54:57.218 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.219605 | orchestrator | 14:54:57.218 STDOUT terraform:  } 2025-10-08 14:54:57.219612 | orchestrator | 14:54:57.218 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-10-08 14:54:57.219616 | orchestrator | 14:54:57.218 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.219620 | orchestrator | 14:54:57.218 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.219624 | orchestrator | 14:54:57.219 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.219628 | orchestrator | 14:54:57.219 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.219632 | orchestrator | 14:54:57.219 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.219635 | orchestrator | 14:54:57.219 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-10-08 14:54:57.219639 | orchestrator | 14:54:57.219 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.219643 | orchestrator | 14:54:57.219 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.219647 | orchestrator | 14:54:57.219 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.219651 | orchestrator | 14:54:57.219 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.219654 | orchestrator | 14:54:57.219 STDOUT terraform:  } 2025-10-08 14:54:57.219662 | orchestrator | 14:54:57.219 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-10-08 14:54:57.219666 | orchestrator | 14:54:57.219 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-08 14:54:57.219670 | orchestrator | 14:54:57.219 STDOUT terraform:  + attachment = (known after apply) 2025-10-08 14:54:57.219674 | orchestrator | 14:54:57.219 STDOUT terraform:  + availability_zone = 
"nova" 2025-10-08 14:54:57.219684 | orchestrator | 14:54:57.219 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.219688 | orchestrator | 14:54:57.219 STDOUT terraform:  + metadata = (known after apply) 2025-10-08 14:54:57.219692 | orchestrator | 14:54:57.219 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-10-08 14:54:57.219695 | orchestrator | 14:54:57.219 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.219699 | orchestrator | 14:54:57.219 STDOUT terraform:  + size = 20 2025-10-08 14:54:57.219703 | orchestrator | 14:54:57.219 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-08 14:54:57.219707 | orchestrator | 14:54:57.219 STDOUT terraform:  + volume_type = "ssd" 2025-10-08 14:54:57.219711 | orchestrator | 14:54:57.219 STDOUT terraform:  } 2025-10-08 14:54:57.219714 | orchestrator | 14:54:57.219 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-10-08 14:54:57.219718 | orchestrator | 14:54:57.219 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-10-08 14:54:57.219722 | orchestrator | 14:54:57.219 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-08 14:54:57.219726 | orchestrator | 14:54:57.219 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-08 14:54:57.219731 | orchestrator | 14:54:57.219 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-08 14:54:57.219737 | orchestrator | 14:54:57.219 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.219805 | orchestrator | 14:54:57.219 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.219813 | orchestrator | 14:54:57.219 STDOUT terraform:  + config_drive = true 2025-10-08 14:54:57.219818 | orchestrator | 14:54:57.219 STDOUT terraform:  + created = (known after apply) 2025-10-08 14:54:57.219882 | orchestrator | 14:54:57.219 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-08 14:54:57.219890 | orchestrator | 
14:54:57.219 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-10-08 14:54:57.219896 | orchestrator | 14:54:57.219 STDOUT terraform:  + force_delete = false 2025-10-08 14:54:57.219921 | orchestrator | 14:54:57.219 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-08 14:54:57.219971 | orchestrator | 14:54:57.219 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.219978 | orchestrator | 14:54:57.219 STDOUT terraform:  + image_id = (known after apply) 2025-10-08 14:54:57.220065 | orchestrator | 14:54:57.219 STDOUT terraform:  + image_name = (known after apply) 2025-10-08 14:54:57.220076 | orchestrator | 14:54:57.220 STDOUT terraform:  + key_pair = "testbed" 2025-10-08 14:54:57.220083 | orchestrator | 14:54:57.220 STDOUT terraform:  + name = "testbed-manager" 2025-10-08 14:54:57.220088 | orchestrator | 14:54:57.220 STDOUT terraform:  + power_state = "active" 2025-10-08 14:54:57.220138 | orchestrator | 14:54:57.220 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.220145 | orchestrator | 14:54:57.220 STDOUT terraform:  + security_groups = (known after apply) 2025-10-08 14:54:57.220179 | orchestrator | 14:54:57.220 STDOUT terraform:  + stop_before_destroy = false 2025-10-08 14:54:57.220217 | orchestrator | 14:54:57.220 STDOUT terraform:  + updated = (known after apply) 2025-10-08 14:54:57.220263 | orchestrator | 14:54:57.220 STDOUT terraform:  + user_data = (sensitive value) 2025-10-08 14:54:57.220269 | orchestrator | 14:54:57.220 STDOUT terraform:  + block_device { 2025-10-08 14:54:57.220307 | orchestrator | 14:54:57.220 STDOUT terraform:  + boot_index = 0 2025-10-08 14:54:57.220313 | orchestrator | 14:54:57.220 STDOUT terraform:  + delete_on_termination = false 2025-10-08 14:54:57.220342 | orchestrator | 14:54:57.220 STDOUT terraform:  + destination_type = "volume" 2025-10-08 14:54:57.220394 | orchestrator | 14:54:57.220 STDOUT terraform:  + multiattach = false 2025-10-08 14:54:57.220402 | orchestrator | 
14:54:57.220 STDOUT terraform:  + source_type = "volume" 2025-10-08 14:54:57.220472 | orchestrator | 14:54:57.220 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.220477 | orchestrator | 14:54:57.220 STDOUT terraform:  } 2025-10-08 14:54:57.220481 | orchestrator | 14:54:57.220 STDOUT terraform:  + network { 2025-10-08 14:54:57.220485 | orchestrator | 14:54:57.220 STDOUT terraform:  + access_network = false 2025-10-08 14:54:57.220490 | orchestrator | 14:54:57.220 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-08 14:54:57.220566 | orchestrator | 14:54:57.220 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-08 14:54:57.220573 | orchestrator | 14:54:57.220 STDOUT terraform:  + mac = (known after apply) 2025-10-08 14:54:57.220579 | orchestrator | 14:54:57.220 STDOUT terraform:  + name = (known after apply) 2025-10-08 14:54:57.220625 | orchestrator | 14:54:57.220 STDOUT terraform:  + port = (known after apply) 2025-10-08 14:54:57.220632 | orchestrator | 14:54:57.220 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.220664 | orchestrator | 14:54:57.220 STDOUT terraform:  } 2025-10-08 14:54:57.220672 | orchestrator | 14:54:57.220 STDOUT terraform:  } 2025-10-08 14:54:57.220702 | orchestrator | 14:54:57.220 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-10-08 14:54:57.220797 | orchestrator | 14:54:57.220 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-08 14:54:57.220805 | orchestrator | 14:54:57.220 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-08 14:54:57.220811 | orchestrator | 14:54:57.220 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-08 14:54:57.220851 | orchestrator | 14:54:57.220 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-08 14:54:57.220862 | orchestrator | 14:54:57.220 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.220961 | orchestrator | 
14:54:57.220 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.220969 | orchestrator | 14:54:57.220 STDOUT terraform:  + config_drive = true 2025-10-08 14:54:57.220973 | orchestrator | 14:54:57.220 STDOUT terraform:  + created = (known after apply) 2025-10-08 14:54:57.220978 | orchestrator | 14:54:57.220 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-08 14:54:57.220983 | orchestrator | 14:54:57.220 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-08 14:54:57.221086 | orchestrator | 14:54:57.220 STDOUT terraform:  + force_delete = false 2025-10-08 14:54:57.221091 | orchestrator | 14:54:57.221 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-08 14:54:57.221095 | orchestrator | 14:54:57.221 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.221100 | orchestrator | 14:54:57.221 STDOUT terraform:  + image_id = (known after apply) 2025-10-08 14:54:57.221133 | orchestrator | 14:54:57.221 STDOUT terraform:  + image_name = (known after apply) 2025-10-08 14:54:57.221184 | orchestrator | 14:54:57.221 STDOUT terraform:  + key_pair = "testbed" 2025-10-08 14:54:57.221189 | orchestrator | 14:54:57.221 STDOUT terraform:  + name = "testbed-node-0" 2025-10-08 14:54:57.221194 | orchestrator | 14:54:57.221 STDOUT terraform:  + power_state = "active" 2025-10-08 14:54:57.221240 | orchestrator | 14:54:57.221 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.221295 | orchestrator | 14:54:57.221 STDOUT terraform:  + security_groups = (known after apply) 2025-10-08 14:54:57.221300 | orchestrator | 14:54:57.221 STDOUT terraform:  + stop_before_destroy = false 2025-10-08 14:54:57.221306 | orchestrator | 14:54:57.221 STDOUT terraform:  + updated = (known after apply) 2025-10-08 14:54:57.221365 | orchestrator | 14:54:57.221 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-10-08 14:54:57.221371 | orchestrator | 14:54:57.221 STDOUT terraform:  + block_device { 
2025-10-08 14:54:57.221413 | orchestrator | 14:54:57.221 STDOUT terraform:  + boot_index = 0 2025-10-08 14:54:57.221418 | orchestrator | 14:54:57.221 STDOUT terraform:  + delete_on_termination = false 2025-10-08 14:54:57.221473 | orchestrator | 14:54:57.221 STDOUT terraform:  + destination_type = "volume" 2025-10-08 14:54:57.221482 | orchestrator | 14:54:57.221 STDOUT terraform:  + multiattach = false 2025-10-08 14:54:57.221487 | orchestrator | 14:54:57.221 STDOUT terraform:  + source_type = "volume" 2025-10-08 14:54:57.221566 | orchestrator | 14:54:57.221 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.221571 | orchestrator | 14:54:57.221 STDOUT terraform:  } 2025-10-08 14:54:57.221575 | orchestrator | 14:54:57.221 STDOUT terraform:  + network { 2025-10-08 14:54:57.221580 | orchestrator | 14:54:57.221 STDOUT terraform:  + access_network = false 2025-10-08 14:54:57.221625 | orchestrator | 14:54:57.221 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-08 14:54:57.221630 | orchestrator | 14:54:57.221 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-08 14:54:57.221678 | orchestrator | 14:54:57.221 STDOUT terraform:  + mac = (known after apply) 2025-10-08 14:54:57.221684 | orchestrator | 14:54:57.221 STDOUT terraform:  + name = (known after apply) 2025-10-08 14:54:57.221710 | orchestrator | 14:54:57.221 STDOUT terraform:  + port = (known after apply) 2025-10-08 14:54:57.221750 | orchestrator | 14:54:57.221 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.221759 | orchestrator | 14:54:57.221 STDOUT terraform:  } 2025-10-08 14:54:57.221763 | orchestrator | 14:54:57.221 STDOUT terraform:  } 2025-10-08 14:54:57.221830 | orchestrator | 14:54:57.221 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-10-08 14:54:57.221837 | orchestrator | 14:54:57.221 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-08 14:54:57.221898 | orchestrator | 
14:54:57.221 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-08 14:54:57.221907 | orchestrator | 14:54:57.221 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-08 14:54:57.221938 | orchestrator | 14:54:57.221 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-08 14:54:57.221985 | orchestrator | 14:54:57.221 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.221994 | orchestrator | 14:54:57.221 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.221999 | orchestrator | 14:54:57.221 STDOUT terraform:  + config_drive = true 2025-10-08 14:54:57.222816 | orchestrator | 14:54:57.222 STDOUT terraform:  + created = (known after apply) 2025-10-08 14:54:57.222825 | orchestrator | 14:54:57.222 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-08 14:54:57.222855 | orchestrator | 14:54:57.222 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-08 14:54:57.222917 | orchestrator | 14:54:57.222 STDOUT terraform:  + force_delete = false 2025-10-08 14:54:57.222925 | orchestrator | 14:54:57.222 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-08 14:54:57.222931 | orchestrator | 14:54:57.222 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.222988 | orchestrator | 14:54:57.222 STDOUT terraform:  + image_id = (known after apply) 2025-10-08 14:54:57.222998 | orchestrator | 14:54:57.222 STDOUT terraform:  + image_name = (known after apply) 2025-10-08 14:54:57.223037 | orchestrator | 14:54:57.222 STDOUT terraform:  + key_pair = "testbed" 2025-10-08 14:54:57.223077 | orchestrator | 14:54:57.223 STDOUT terraform:  + name = "testbed-node-1" 2025-10-08 14:54:57.223082 | orchestrator | 14:54:57.223 STDOUT terraform:  + power_state = "active" 2025-10-08 14:54:57.223128 | orchestrator | 14:54:57.223 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.223134 | orchestrator | 14:54:57.223 STDOUT terraform:  + security_groups = (known after apply) 
2025-10-08 14:54:57.223176 | orchestrator | 14:54:57.223 STDOUT terraform:  + stop_before_destroy = false 2025-10-08 14:54:57.223246 | orchestrator | 14:54:57.223 STDOUT terraform:  + updated = (known after apply) 2025-10-08 14:54:57.223255 | orchestrator | 14:54:57.223 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-10-08 14:54:57.223260 | orchestrator | 14:54:57.223 STDOUT terraform:  + block_device { 2025-10-08 14:54:57.223310 | orchestrator | 14:54:57.223 STDOUT terraform:  + boot_index = 0 2025-10-08 14:54:57.223315 | orchestrator | 14:54:57.223 STDOUT terraform:  + delete_on_termination = false 2025-10-08 14:54:57.223320 | orchestrator | 14:54:57.223 STDOUT terraform:  + destination_type = "volume" 2025-10-08 14:54:57.223370 | orchestrator | 14:54:57.223 STDOUT terraform:  + multiattach = false 2025-10-08 14:54:57.223377 | orchestrator | 14:54:57.223 STDOUT terraform:  + source_type = "volume" 2025-10-08 14:54:57.223443 | orchestrator | 14:54:57.223 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.223448 | orchestrator | 14:54:57.223 STDOUT terraform:  } 2025-10-08 14:54:57.223452 | orchestrator | 14:54:57.223 STDOUT terraform:  + network { 2025-10-08 14:54:57.223457 | orchestrator | 14:54:57.223 STDOUT terraform:  + access_network = false 2025-10-08 14:54:57.223509 | orchestrator | 14:54:57.223 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-08 14:54:57.223514 | orchestrator | 14:54:57.223 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-08 14:54:57.223615 | orchestrator | 14:54:57.223 STDOUT terraform:  + mac = (known after apply) 2025-10-08 14:54:57.223624 | orchestrator | 14:54:57.223 STDOUT terraform:  + name = (known after apply) 2025-10-08 14:54:57.223631 | orchestrator | 14:54:57.223 STDOUT terraform:  + port = (known after apply) 2025-10-08 14:54:57.223637 | orchestrator | 14:54:57.223 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.223641 | 
orchestrator | 14:54:57.223 STDOUT terraform:  } 2025-10-08 14:54:57.223646 | orchestrator | 14:54:57.223 STDOUT terraform:  } 2025-10-08 14:54:57.223698 | orchestrator | 14:54:57.223 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-10-08 14:54:57.223745 | orchestrator | 14:54:57.223 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-08 14:54:57.223754 | orchestrator | 14:54:57.223 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-08 14:54:57.223799 | orchestrator | 14:54:57.223 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-08 14:54:57.223919 | orchestrator | 14:54:57.223 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-08 14:54:57.223924 | orchestrator | 14:54:57.223 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.223928 | orchestrator | 14:54:57.223 STDOUT terraform:  + availability_zone = "nova" 2025-10-08 14:54:57.223932 | orchestrator | 14:54:57.223 STDOUT terraform:  + config_drive = true 2025-10-08 14:54:57.223936 | orchestrator | 14:54:57.223 STDOUT terraform:  + created = (known after apply) 2025-10-08 14:54:57.223985 | orchestrator | 14:54:57.223 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-08 14:54:57.223991 | orchestrator | 14:54:57.223 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-08 14:54:57.224000 | orchestrator | 14:54:57.223 STDOUT terraform:  + force_delete = false 2025-10-08 14:54:57.224046 | orchestrator | 14:54:57.223 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-08 14:54:57.224056 | orchestrator | 14:54:57.224 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.224115 | orchestrator | 14:54:57.224 STDOUT terraform:  + image_id = (known after apply) 2025-10-08 14:54:57.224122 | orchestrator | 14:54:57.224 STDOUT terraform:  + image_name = (known after apply) 2025-10-08 14:54:57.224164 | orchestrator | 14:54:57.224 STDOUT terraform:  + 
key_pair = "testbed" 2025-10-08 14:54:57.224171 | orchestrator | 14:54:57.224 STDOUT terraform:  + name = "testbed-node-2" 2025-10-08 14:54:57.224198 | orchestrator | 14:54:57.224 STDOUT terraform:  + power_state = "active" 2025-10-08 14:54:57.224229 | orchestrator | 14:54:57.224 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.224269 | orchestrator | 14:54:57.224 STDOUT terraform:  + security_groups = (known after apply) 2025-10-08 14:54:57.224282 | orchestrator | 14:54:57.224 STDOUT terraform:  + stop_before_destroy = false 2025-10-08 14:54:57.224311 | orchestrator | 14:54:57.224 STDOUT terraform:  + updated = (known after apply) 2025-10-08 14:54:57.224366 | orchestrator | 14:54:57.224 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-10-08 14:54:57.224373 | orchestrator | 14:54:57.224 STDOUT terraform:  + block_device { 2025-10-08 14:54:57.224443 | orchestrator | 14:54:57.224 STDOUT terraform:  + boot_index = 0 2025-10-08 14:54:57.224448 | orchestrator | 14:54:57.224 STDOUT terraform:  + delete_on_termination = false 2025-10-08 14:54:57.224454 | orchestrator | 14:54:57.224 STDOUT terraform:  + destination_type = "volume" 2025-10-08 14:54:57.224509 | orchestrator | 14:54:57.224 STDOUT terraform:  + multiattach = false 2025-10-08 14:54:57.224514 | orchestrator | 14:54:57.224 STDOUT terraform:  + source_type = "volume" 2025-10-08 14:54:57.224550 | orchestrator | 14:54:57.224 STDOUT terraform:  + uuid = (known after apply) 2025-10-08 14:54:57.224555 | orchestrator | 14:54:57.224 STDOUT terraform:  } 2025-10-08 14:54:57.224560 | orchestrator | 14:54:57.224 STDOUT terraform:  + network { 2025-10-08 14:54:57.224667 | orchestrator | 14:54:57.224 STDOUT terraform:  + access_network = false 2025-10-08 14:54:57.224672 | orchestrator | 14:54:57.224 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-08 14:54:57.224676 | orchestrator | 14:54:57.224 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-08 
2025-10-08 14:54:57.224679 | orchestrator | 14:54:57.224 STDOUT terraform:
          + mac  = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
terraform:  + admin_state_up = (known after apply) 2025-10-08 14:54:57.246595 | orchestrator | 14:54:57.243 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-08 14:54:57.246599 | orchestrator | 14:54:57.243 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-08 14:54:57.246603 | orchestrator | 14:54:57.243 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.246606 | orchestrator | 14:54:57.243 STDOUT terraform:  + device_id = (known after apply) 2025-10-08 14:54:57.246610 | orchestrator | 14:54:57.243 STDOUT terraform:  + device_owner = (known after apply) 2025-10-08 14:54:57.246614 | orchestrator | 14:54:57.243 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-08 14:54:57.246618 | orchestrator | 14:54:57.243 STDOUT terraform:  + dns_name = (known after apply) 2025-10-08 14:54:57.246622 | orchestrator | 14:54:57.243 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.246628 | orchestrator | 14:54:57.243 STDOUT terraform:  + mac_address = (known after apply) 2025-10-08 14:54:57.246632 | orchestrator | 14:54:57.243 STDOUT terraform:  + network_id = (known after apply) 2025-10-08 14:54:57.246636 | orchestrator | 14:54:57.243 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-08 14:54:57.246640 | orchestrator | 14:54:57.243 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-08 14:54:57.246643 | orchestrator | 14:54:57.243 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.246647 | orchestrator | 14:54:57.243 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-08 14:54:57.246651 | orchestrator | 14:54:57.243 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.246655 | orchestrator | 14:54:57.243 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246659 | orchestrator | 14:54:57.243 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-08 14:54:57.246665 | orchestrator | 
14:54:57.243 STDOUT terraform:  } 2025-10-08 14:54:57.246669 | orchestrator | 14:54:57.243 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246673 | orchestrator | 14:54:57.243 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-08 14:54:57.246676 | orchestrator | 14:54:57.243 STDOUT terraform:  } 2025-10-08 14:54:57.246680 | orchestrator | 14:54:57.243 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246684 | orchestrator | 14:54:57.243 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-08 14:54:57.246688 | orchestrator | 14:54:57.243 STDOUT terraform:  } 2025-10-08 14:54:57.246692 | orchestrator | 14:54:57.243 STDOUT terraform:  + binding (known after apply) 2025-10-08 14:54:57.246696 | orchestrator | 14:54:57.243 STDOUT terraform:  + fixed_ip { 2025-10-08 14:54:57.246699 | orchestrator | 14:54:57.243 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-10-08 14:54:57.246703 | orchestrator | 14:54:57.243 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-08 14:54:57.246707 | orchestrator | 14:54:57.243 STDOUT terraform:  } 2025-10-08 14:54:57.246711 | orchestrator | 14:54:57.243 STDOUT terraform:  } 2025-10-08 14:54:57.246715 | orchestrator | 14:54:57.243 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-10-08 14:54:57.246718 | orchestrator | 14:54:57.243 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-08 14:54:57.246722 | orchestrator | 14:54:57.244 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-08 14:54:57.246726 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-08 14:54:57.246730 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-08 14:54:57.246733 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.246737 | orchestrator | 14:54:57.244 STDOUT 
terraform:  + device_id = (known after apply) 2025-10-08 14:54:57.246741 | orchestrator | 14:54:57.244 STDOUT terraform:  + device_owner = (known after apply) 2025-10-08 14:54:57.246748 | orchestrator | 14:54:57.244 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-08 14:54:57.246752 | orchestrator | 14:54:57.244 STDOUT terraform:  + dns_name = (known after apply) 2025-10-08 14:54:57.246756 | orchestrator | 14:54:57.244 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.246760 | orchestrator | 14:54:57.244 STDOUT terraform:  + mac_address = (known after apply) 2025-10-08 14:54:57.246765 | orchestrator | 14:54:57.244 STDOUT terraform:  + network_id = (known after apply) 2025-10-08 14:54:57.246769 | orchestrator | 14:54:57.244 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-08 14:54:57.246773 | orchestrator | 14:54:57.244 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-08 14:54:57.246777 | orchestrator | 14:54:57.244 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.246781 | orchestrator | 14:54:57.244 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-08 14:54:57.246784 | orchestrator | 14:54:57.244 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.246788 | orchestrator | 14:54:57.244 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246792 | orchestrator | 14:54:57.244 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-08 14:54:57.246796 | orchestrator | 14:54:57.244 STDOUT terraform:  } 2025-10-08 14:54:57.246800 | orchestrator | 14:54:57.244 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246804 | orchestrator | 14:54:57.244 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-08 14:54:57.246807 | orchestrator | 14:54:57.244 STDOUT terraform:  } 2025-10-08 14:54:57.246811 | orchestrator | 14:54:57.244 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246815 | 
orchestrator | 14:54:57.244 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-08 14:54:57.246821 | orchestrator | 14:54:57.244 STDOUT terraform:  } 2025-10-08 14:54:57.246825 | orchestrator | 14:54:57.244 STDOUT terraform:  + binding (known after apply) 2025-10-08 14:54:57.246829 | orchestrator | 14:54:57.244 STDOUT terraform:  + fixed_ip { 2025-10-08 14:54:57.246833 | orchestrator | 14:54:57.244 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-10-08 14:54:57.246837 | orchestrator | 14:54:57.244 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-08 14:54:57.246841 | orchestrator | 14:54:57.244 STDOUT terraform:  } 2025-10-08 14:54:57.246844 | orchestrator | 14:54:57.244 STDOUT terraform:  } 2025-10-08 14:54:57.246848 | orchestrator | 14:54:57.244 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-10-08 14:54:57.246852 | orchestrator | 14:54:57.244 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-08 14:54:57.246856 | orchestrator | 14:54:57.244 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-08 14:54:57.246860 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-08 14:54:57.246863 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-08 14:54:57.246878 | orchestrator | 14:54:57.244 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.246886 | orchestrator | 14:54:57.244 STDOUT terraform:  + device_id = (known after apply) 2025-10-08 14:54:57.246889 | orchestrator | 14:54:57.244 STDOUT terraform:  + device_owner = (known after apply) 2025-10-08 14:54:57.246893 | orchestrator | 14:54:57.245 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-08 14:54:57.246897 | orchestrator | 14:54:57.245 STDOUT terraform:  + dns_name = (known after apply) 2025-10-08 14:54:57.246901 | orchestrator | 14:54:57.245 STDOUT terraform:  
+ id = (known after apply) 2025-10-08 14:54:57.246904 | orchestrator | 14:54:57.245 STDOUT terraform:  + mac_address = (known after apply) 2025-10-08 14:54:57.246908 | orchestrator | 14:54:57.245 STDOUT terraform:  + network_id = (known after apply) 2025-10-08 14:54:57.246912 | orchestrator | 14:54:57.245 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-08 14:54:57.246916 | orchestrator | 14:54:57.245 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-08 14:54:57.246919 | orchestrator | 14:54:57.245 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.246923 | orchestrator | 14:54:57.245 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-08 14:54:57.246927 | orchestrator | 14:54:57.245 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.246931 | orchestrator | 14:54:57.245 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246935 | orchestrator | 14:54:57.245 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-08 14:54:57.246938 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.246942 | orchestrator | 14:54:57.245 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246946 | orchestrator | 14:54:57.245 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-08 14:54:57.246950 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.246954 | orchestrator | 14:54:57.245 STDOUT terraform:  + allowed_address_pairs { 2025-10-08 14:54:57.246957 | orchestrator | 14:54:57.245 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-08 14:54:57.246961 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.246965 | orchestrator | 14:54:57.245 STDOUT terraform:  + binding (known after apply) 2025-10-08 14:54:57.246969 | orchestrator | 14:54:57.245 STDOUT terraform:  + fixed_ip { 2025-10-08 14:54:57.246973 | orchestrator | 14:54:57.245 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-10-08 14:54:57.246976 | orchestrator | 14:54:57.245 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-08 14:54:57.246980 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.246987 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.246991 | orchestrator | 14:54:57.245 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-10-08 14:54:57.246995 | orchestrator | 14:54:57.245 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-10-08 14:54:57.246999 | orchestrator | 14:54:57.245 STDOUT terraform:  + force_destroy = false 2025-10-08 14:54:57.247005 | orchestrator | 14:54:57.245 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247009 | orchestrator | 14:54:57.245 STDOUT terraform:  + port_id = (known after apply) 2025-10-08 14:54:57.247013 | orchestrator | 14:54:57.245 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.247017 | orchestrator | 14:54:57.245 STDOUT terraform:  + router_id = (known after apply) 2025-10-08 14:54:57.247020 | orchestrator | 14:54:57.245 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-08 14:54:57.247024 | orchestrator | 14:54:57.245 STDOUT terraform:  } 2025-10-08 14:54:57.247028 | orchestrator | 14:54:57.245 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-10-08 14:54:57.247034 | orchestrator | 14:54:57.245 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-10-08 14:54:57.247038 | orchestrator | 14:54:57.245 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-08 14:54:57.247042 | orchestrator | 14:54:57.245 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.247046 | orchestrator | 14:54:57.245 STDOUT terraform:  + availability_zone_hints = [ 2025-10-08 14:54:57.247049 | orchestrator | 14:54:57.246 STDOUT terraform:  + "nova", 2025-10-08 14:54:57.247053 | 
orchestrator | 14:54:57.246 STDOUT terraform:  ] 2025-10-08 14:54:57.247057 | orchestrator | 14:54:57.246 STDOUT terraform:  + distributed = (known after apply) 2025-10-08 14:54:57.247061 | orchestrator | 14:54:57.246 STDOUT terraform:  + enable_snat = (known after apply) 2025-10-08 14:54:57.247064 | orchestrator | 14:54:57.246 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-10-08 14:54:57.247068 | orchestrator | 14:54:57.246 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-10-08 14:54:57.247072 | orchestrator | 14:54:57.246 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247076 | orchestrator | 14:54:57.246 STDOUT terraform:  + name = "testbed" 2025-10-08 14:54:57.247080 | orchestrator | 14:54:57.246 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.247083 | orchestrator | 14:54:57.246 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.247087 | orchestrator | 14:54:57.246 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-10-08 14:54:57.247091 | orchestrator | 14:54:57.246 STDOUT terraform:  } 2025-10-08 14:54:57.247095 | orchestrator | 14:54:57.246 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-10-08 14:54:57.247099 | orchestrator | 14:54:57.246 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-10-08 14:54:57.247102 | orchestrator | 14:54:57.246 STDOUT terraform:  + description = "ssh" 2025-10-08 14:54:57.247106 | orchestrator | 14:54:57.246 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.247110 | orchestrator | 14:54:57.246 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.247114 | orchestrator | 14:54:57.246 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247120 | orchestrator | 14:54:57.246 STDOUT terraform:  + port_range_max = 22 2025-10-08 14:54:57.247124 | 
orchestrator | 14:54:57.246 STDOUT terraform:  + port_range_min = 22 2025-10-08 14:54:57.247127 | orchestrator | 14:54:57.246 STDOUT terraform:  + protocol = "tcp" 2025-10-08 14:54:57.247131 | orchestrator | 14:54:57.246 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.247137 | orchestrator | 14:54:57.246 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.247141 | orchestrator | 14:54:57.246 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.247145 | orchestrator | 14:54:57.246 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.247149 | orchestrator | 14:54:57.246 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.247152 | orchestrator | 14:54:57.246 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.247157 | orchestrator | 14:54:57.246 STDOUT terraform:  } 2025-10-08 14:54:57.247161 | orchestrator | 14:54:57.246 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-10-08 14:54:57.247165 | orchestrator | 14:54:57.246 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-10-08 14:54:57.247168 | orchestrator | 14:54:57.246 STDOUT terraform:  + description = "wireguard" 2025-10-08 14:54:57.247172 | orchestrator | 14:54:57.246 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.247176 | orchestrator | 14:54:57.246 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.247180 | orchestrator | 14:54:57.246 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247183 | orchestrator | 14:54:57.246 STDOUT terraform:  + port_range_ 2025-10-08 14:54:57.247187 | orchestrator | 14:54:57.247 STDOUT terraform: max = 51820 2025-10-08 14:54:57.247191 | orchestrator | 14:54:57.247 STDOUT terraform:  + port_range_min = 51820 2025-10-08 14:54:57.247195 | orchestrator | 14:54:57.247 STDOUT 
terraform:  + protocol = "udp" 2025-10-08 14:54:57.247198 | orchestrator | 14:54:57.247 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.247204 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.247207 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.247212 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.247254 | orchestrator | 14:54:57.247 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.247316 | orchestrator | 14:54:57.247 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.247322 | orchestrator | 14:54:57.247 STDOUT terraform:  } 2025-10-08 14:54:57.247350 | orchestrator | 14:54:57.247 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-10-08 14:54:57.247417 | orchestrator | 14:54:57.247 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-10-08 14:54:57.247430 | orchestrator | 14:54:57.247 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.247453 | orchestrator | 14:54:57.247 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.247491 | orchestrator | 14:54:57.247 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247531 | orchestrator | 14:54:57.247 STDOUT terraform:  + protocol = "tcp" 2025-10-08 14:54:57.247557 | orchestrator | 14:54:57.247 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.247608 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.247614 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.247647 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-10-08 
14:54:57.247699 | orchestrator | 14:54:57.247 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.247737 | orchestrator | 14:54:57.247 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.247742 | orchestrator | 14:54:57.247 STDOUT terraform:  } 2025-10-08 14:54:57.247814 | orchestrator | 14:54:57.247 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-10-08 14:54:57.247821 | orchestrator | 14:54:57.247 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-10-08 14:54:57.247858 | orchestrator | 14:54:57.247 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.247895 | orchestrator | 14:54:57.247 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.247924 | orchestrator | 14:54:57.247 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.247932 | orchestrator | 14:54:57.247 STDOUT terraform:  + protocol = "udp" 2025-10-08 14:54:57.248049 | orchestrator | 14:54:57.247 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.248057 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.248061 | orchestrator | 14:54:57.247 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.248066 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-10-08 14:54:57.248095 | orchestrator | 14:54:57.248 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.248189 | orchestrator | 14:54:57.248 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.248195 | orchestrator | 14:54:57.248 STDOUT terraform:  } 2025-10-08 14:54:57.248199 | orchestrator | 14:54:57.248 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-10-08 14:54:57.248243 | orchestrator | 
14:54:57.248 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-10-08 14:54:57.248270 | orchestrator | 14:54:57.248 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.248281 | orchestrator | 14:54:57.248 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.248344 | orchestrator | 14:54:57.248 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.248354 | orchestrator | 14:54:57.248 STDOUT terraform:  + protocol = "icmp" 2025-10-08 14:54:57.248379 | orchestrator | 14:54:57.248 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.248422 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.248449 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.248513 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.248521 | orchestrator | 14:54:57.248 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.248560 | orchestrator | 14:54:57.248 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.248572 | orchestrator | 14:54:57.248 STDOUT terraform:  } 2025-10-08 14:54:57.248626 | orchestrator | 14:54:57.248 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-10-08 14:54:57.248665 | orchestrator | 14:54:57.248 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-10-08 14:54:57.248705 | orchestrator | 14:54:57.248 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.248712 | orchestrator | 14:54:57.248 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.248753 | orchestrator | 14:54:57.248 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.248802 | orchestrator | 14:54:57.248 STDOUT terraform:  + protocol = "tcp" 2025-10-08 
14:54:57.248808 | orchestrator | 14:54:57.248 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.248849 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.248896 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.248902 | orchestrator | 14:54:57.248 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.248939 | orchestrator | 14:54:57.248 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.248966 | orchestrator | 14:54:57.248 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.248975 | orchestrator | 14:54:57.248 STDOUT terraform:  } 2025-10-08 14:54:57.249031 | orchestrator | 14:54:57.248 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-10-08 14:54:57.249084 | orchestrator | 14:54:57.249 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-10-08 14:54:57.249123 | orchestrator | 14:54:57.249 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.249128 | orchestrator | 14:54:57.249 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.249190 | orchestrator | 14:54:57.249 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.249195 | orchestrator | 14:54:57.249 STDOUT terraform:  + protocol = "udp" 2025-10-08 14:54:57.249233 | orchestrator | 14:54:57.249 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.249247 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.249349 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.249355 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.249359 | orchestrator | 14:54:57.249 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-10-08 14:54:57.249365 | orchestrator | 14:54:57.249 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.249370 | orchestrator | 14:54:57.249 STDOUT terraform:  } 2025-10-08 14:54:57.249433 | orchestrator | 14:54:57.249 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-10-08 14:54:57.249500 | orchestrator | 14:54:57.249 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-10-08 14:54:57.249561 | orchestrator | 14:54:57.249 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.249567 | orchestrator | 14:54:57.249 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.249651 | orchestrator | 14:54:57.249 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.249657 | orchestrator | 14:54:57.249 STDOUT terraform:  + protocol = "icmp" 2025-10-08 14:54:57.249661 | orchestrator | 14:54:57.249 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.249666 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.249709 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.249735 | orchestrator | 14:54:57.249 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.249768 | orchestrator | 14:54:57.249 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.249807 | orchestrator | 14:54:57.249 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.249815 | orchestrator | 14:54:57.249 STDOUT terraform:  } 2025-10-08 14:54:57.249878 | orchestrator | 14:54:57.249 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-10-08 14:54:57.249909 | orchestrator | 14:54:57.249 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_rule_vrrp" { 2025-10-08 14:54:57.249937 | orchestrator | 14:54:57.249 STDOUT terraform:  + description = "vrrp" 2025-10-08 14:54:57.249994 | orchestrator | 14:54:57.249 STDOUT terraform:  + direction = "ingress" 2025-10-08 14:54:57.250000 | orchestrator | 14:54:57.249 STDOUT terraform:  + ethertype = "IPv4" 2025-10-08 14:54:57.250062 | orchestrator | 14:54:57.249 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.262110 | orchestrator | 14:54:57.250 STDOUT terraform:  + protocol = "112" 2025-10-08 14:54:57.262158 | orchestrator | 14:54:57.261 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.262163 | orchestrator | 14:54:57.261 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-08 14:54:57.262180 | orchestrator | 14:54:57.261 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-08 14:54:57.262184 | orchestrator | 14:54:57.261 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-08 14:54:57.262188 | orchestrator | 14:54:57.261 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-08 14:54:57.262192 | orchestrator | 14:54:57.261 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.262196 | orchestrator | 14:54:57.261 STDOUT terraform:  } 2025-10-08 14:54:57.262200 | orchestrator | 14:54:57.261 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-10-08 14:54:57.262205 | orchestrator | 14:54:57.261 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-10-08 14:54:57.262210 | orchestrator | 14:54:57.261 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.262214 | orchestrator | 14:54:57.261 STDOUT terraform:  + description = "management security group" 2025-10-08 14:54:57.262217 | orchestrator | 14:54:57.261 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.262221 | orchestrator | 14:54:57.261 STDOUT terraform:  + 
name = "testbed-management" 2025-10-08 14:54:57.262225 | orchestrator | 14:54:57.261 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.262229 | orchestrator | 14:54:57.262 STDOUT terraform:  + stateful = (known after apply) 2025-10-08 14:54:57.262233 | orchestrator | 14:54:57.262 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.262242 | orchestrator | 14:54:57.262 STDOUT terraform:  } 2025-10-08 14:54:57.262246 | orchestrator | 14:54:57.262 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-10-08 14:54:57.262250 | orchestrator | 14:54:57.262 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-10-08 14:54:57.262254 | orchestrator | 14:54:57.262 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.262258 | orchestrator | 14:54:57.262 STDOUT terraform:  + description = "node security group" 2025-10-08 14:54:57.262261 | orchestrator | 14:54:57.262 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.262267 | orchestrator | 14:54:57.262 STDOUT terraform:  + name = "testbed-node" 2025-10-08 14:54:57.262271 | orchestrator | 14:54:57.262 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.263448 | orchestrator | 14:54:57.262 STDOUT terraform:  + stateful = (known after apply) 2025-10-08 14:54:57.263611 | orchestrator | 14:54:57.262 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.263624 | orchestrator | 14:54:57.262 STDOUT terraform:  } 2025-10-08 14:54:57.263629 | orchestrator | 14:54:57.262 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-10-08 14:54:57.263634 | orchestrator | 14:54:57.262 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-10-08 14:54:57.263638 | orchestrator | 14:54:57.262 STDOUT terraform:  + all_tags = (known after apply) 2025-10-08 14:54:57.263642 | orchestrator | 
14:54:57.262 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-10-08 14:54:57.263645 | orchestrator | 14:54:57.262 STDOUT terraform:  + dns_nameservers = [ 2025-10-08 14:54:57.263660 | orchestrator | 14:54:57.262 STDOUT terraform:  + "8.8.8.8", 2025-10-08 14:54:57.263663 | orchestrator | 14:54:57.262 STDOUT terraform:  + "9.9.9.9", 2025-10-08 14:54:57.263667 | orchestrator | 14:54:57.262 STDOUT terraform:  ] 2025-10-08 14:54:57.263671 | orchestrator | 14:54:57.262 STDOUT terraform:  + enable_dhcp = true 2025-10-08 14:54:57.263675 | orchestrator | 14:54:57.262 STDOUT terraform:  + gateway_ip = (known after apply) 2025-10-08 14:54:57.263678 | orchestrator | 14:54:57.262 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.263682 | orchestrator | 14:54:57.262 STDOUT terraform:  + ip_version = 4 2025-10-08 14:54:57.263686 | orchestrator | 14:54:57.262 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-10-08 14:54:57.263690 | orchestrator | 14:54:57.262 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-10-08 14:54:57.263693 | orchestrator | 14:54:57.262 STDOUT terraform:  + name = "subnet-testbed-management" 2025-10-08 14:54:57.263697 | orchestrator | 14:54:57.262 STDOUT terraform:  + network_id = (known after apply) 2025-10-08 14:54:57.263701 | orchestrator | 14:54:57.262 STDOUT terraform:  + no_gateway = false 2025-10-08 14:54:57.263705 | orchestrator | 14:54:57.262 STDOUT terraform:  + region = (known after apply) 2025-10-08 14:54:57.263727 | orchestrator | 14:54:57.262 STDOUT terraform:  + service_types = (known after apply) 2025-10-08 14:54:57.263731 | orchestrator | 14:54:57.262 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-08 14:54:57.263735 | orchestrator | 14:54:57.262 STDOUT terraform:  + allocation_pool { 2025-10-08 14:54:57.263741 | orchestrator | 14:54:57.262 STDOUT terraform:  + end = "192.168.31.250" 2025-10-08 14:54:57.263745 | orchestrator | 14:54:57.262 STDOUT terraform:  + start = "192.168.31.200" 
2025-10-08 14:54:57.263749 | orchestrator | 14:54:57.262 STDOUT terraform:  } 2025-10-08 14:54:57.263753 | orchestrator | 14:54:57.262 STDOUT terraform:  } 2025-10-08 14:54:57.263756 | orchestrator | 14:54:57.262 STDOUT terraform:  # terraform_data.image will be created 2025-10-08 14:54:57.263760 | orchestrator | 14:54:57.262 STDOUT terraform:  + resource "terraform_data" "image" { 2025-10-08 14:54:57.263764 | orchestrator | 14:54:57.262 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.263768 | orchestrator | 14:54:57.262 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-10-08 14:54:57.263771 | orchestrator | 14:54:57.262 STDOUT terraform:  + output = (known after apply) 2025-10-08 14:54:57.263775 | orchestrator | 14:54:57.262 STDOUT terraform:  } 2025-10-08 14:54:57.263779 | orchestrator | 14:54:57.262 STDOUT terraform:  # terraform_data.image_node will be created 2025-10-08 14:54:57.263782 | orchestrator | 14:54:57.262 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-10-08 14:54:57.263786 | orchestrator | 14:54:57.263 STDOUT terraform:  + id = (known after apply) 2025-10-08 14:54:57.263790 | orchestrator | 14:54:57.263 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-10-08 14:54:57.263794 | orchestrator | 14:54:57.263 STDOUT terraform:  + output = (known after apply) 2025-10-08 14:54:57.263798 | orchestrator | 14:54:57.263 STDOUT terraform:  } 2025-10-08 14:54:57.263813 | orchestrator | 14:54:57.263 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-10-08 14:54:57.263817 | orchestrator | 14:54:57.263 STDOUT terraform: Changes to Outputs: 2025-10-08 14:54:57.263821 | orchestrator | 14:54:57.263 STDOUT terraform:  + manager_address = (sensitive value) 2025-10-08 14:54:57.263825 | orchestrator | 14:54:57.263 STDOUT terraform:  + private_key = (sensitive value) 2025-10-08 14:54:57.461211 | orchestrator | 14:54:57.458 STDOUT terraform: terraform_data.image_node: Creating... 
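The plan output above corresponds to provider configuration along these lines. This is a minimal sketch, not the actual testbed module: attribute values are taken from the plan output, while the resource wiring (the `network_id` and `security_group_id` references) is an assumption for illustration.

```hcl
# Sketch reconstructed from the plan output; values come from the plan,
# the references between resources are assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# VRRP is IP protocol 112; the rule admits it from anywhere, as in the plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Note that `protocol` takes the numeric string `"112"` because VRRP has no well-known name in the Neutron API the way `tcp` or `udp` do.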
2025-10-08 14:54:57.461467 | orchestrator | 14:54:57.458 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=dbf50c34-747d-c015-1359-cb819c01eda0]
2025-10-08 14:54:57.461489 | orchestrator | 14:54:57.458 STDOUT terraform: terraform_data.image: Creating...
2025-10-08 14:54:57.461502 | orchestrator | 14:54:57.458 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=81afe776-e16c-f3a8-0399-d394b1bc5698]
2025-10-08 14:54:57.478123 | orchestrator | 14:54:57.476 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-10-08 14:54:57.489679 | orchestrator | 14:54:57.489 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-10-08 14:54:57.489733 | orchestrator | 14:54:57.489 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-10-08 14:54:57.492944 | orchestrator | 14:54:57.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-10-08 14:54:57.497615 | orchestrator | 14:54:57.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-10-08 14:54:57.498061 | orchestrator | 14:54:57.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-10-08 14:54:57.498121 | orchestrator | 14:54:57.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-10-08 14:54:57.498727 | orchestrator | 14:54:57.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-10-08 14:54:57.498967 | orchestrator | 14:54:57.498 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-10-08 14:54:57.511804 | orchestrator | 14:54:57.510 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-10-08 14:54:57.927552 | orchestrator | 14:54:57.925 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-10-08 14:54:57.935660 | orchestrator | 14:54:57.935 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-10-08 14:54:57.935836 | orchestrator | 14:54:57.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-10-08 14:54:57.943472 | orchestrator | 14:54:57.943 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-10-08 14:54:58.033152 | orchestrator | 14:54:58.032 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-10-08 14:54:58.039494 | orchestrator | 14:54:58.039 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-10-08 14:54:58.951807 | orchestrator | 14:54:58.951 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 2s [id=7479395b-9378-4d67-a874-54848754eeb8]
2025-10-08 14:54:58.963776 | orchestrator | 14:54:58.963 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-10-08 14:55:01.065403 | orchestrator | 14:55:01.065 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=b2b0a7c4-684b-467f-bea3-a2180df0d298]
2025-10-08 14:55:01.078308 | orchestrator | 14:55:01.078 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-10-08 14:55:01.099513 | orchestrator | 14:55:01.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=5c2995ee-2a0f-4f5c-ac7c-066cefbff021]
2025-10-08 14:55:01.104389 | orchestrator | 14:55:01.104 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-10-08 14:55:01.121864 | orchestrator | 14:55:01.121 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=96ffedcc-0414-421a-b44e-b183e9db41fd]
2025-10-08 14:55:01.123815 | orchestrator | 14:55:01.123 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=8931d93f-304b-4b68-94eb-87cca6c6eade]
2025-10-08 14:55:01.132656 | orchestrator | 14:55:01.132 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-10-08 14:55:01.135698 | orchestrator | 14:55:01.135 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-10-08 14:55:01.142895 | orchestrator | 14:55:01.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=d1c70491-aba4-4ff7-8b88-cbd07cfcddb1]
2025-10-08 14:55:01.147536 | orchestrator | 14:55:01.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=e7b164cd-18c4-4443-b153-66ef822cc182]
2025-10-08 14:55:01.148077 | orchestrator | 14:55:01.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-10-08 14:55:01.151271 | orchestrator | 14:55:01.151 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5e0ff1fd67cbbebb8d3b86c79787b14ee9d63c06]
2025-10-08 14:55:01.151905 | orchestrator | 14:55:01.151 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-10-08 14:55:01.165710 | orchestrator | 14:55:01.165 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-10-08 14:55:01.201736 | orchestrator | 14:55:01.201 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=501046cd-0181-4267-8e39-455d7db25dff]
2025-10-08 14:55:01.207154 | orchestrator | 14:55:01.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f279d016-8061-4f1a-b5de-972c25793956]
2025-10-08 14:55:01.220477 | orchestrator | 14:55:01.220 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-10-08 14:55:01.220862 | orchestrator | 14:55:01.220 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-10-08 14:55:01.225797 | orchestrator | 14:55:01.225 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=c70d1d90414d169ae37e8962e26084895af9e88b]
2025-10-08 14:55:01.262625 | orchestrator | 14:55:01.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=1a05d404-282d-4245-ac59-6a85ac73ef0f]
2025-10-08 14:55:02.158761 | orchestrator | 14:55:02.158 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c543c933-8d3f-4415-8fde-c719aa9ce283]
2025-10-08 14:55:02.165212 | orchestrator | 14:55:02.165 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-10-08 14:55:02.284624 | orchestrator | 14:55:02.284 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a1b63dcb-30b6-4b45-95f0-a27c5251f615]
2025-10-08 14:55:04.441880 | orchestrator | 14:55:04.441 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8]
2025-10-08 14:55:04.532756 | orchestrator | 14:55:04.532 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=a79e8257-b31c-4b26-9d3c-62ccc1082da3]
2025-10-08 14:55:04.580209 | orchestrator | 14:55:04.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=1cc5cf69-2914-42bf-9b1b-88c775b3ec52]
2025-10-08 14:55:04.583088 | orchestrator | 14:55:04.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6baf8ee8-9e1e-44df-871f-0b875401fb68]
2025-10-08 14:55:04.593819 | orchestrator | 14:55:04.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=704d8bac-66ec-438a-bbe7-82d4aba4ca14]
2025-10-08 14:55:04.703213 | orchestrator | 14:55:04.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec]
2025-10-08 14:55:05.344687 | orchestrator | 14:55:05.344 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=b2153bd0-8b54-4acc-8d56-10da6549b265]
2025-10-08 14:55:05.350401 | orchestrator | 14:55:05.350 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-10-08 14:55:05.352037 | orchestrator | 14:55:05.351 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-10-08 14:55:05.357605 | orchestrator | 14:55:05.357 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-10-08 14:55:05.549891 | orchestrator | 14:55:05.549 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=ae4dfaf8-8d17-4cff-a867-cef27e244ca4]
2025-10-08 14:55:05.557750 | orchestrator | 14:55:05.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-10-08 14:55:05.557956 | orchestrator | 14:55:05.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-10-08 14:55:05.558646 | orchestrator | 14:55:05.558 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-10-08 14:55:05.559616 | orchestrator | 14:55:05.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-10-08 14:55:05.559630 | orchestrator | 14:55:05.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-10-08 14:55:05.565793 | orchestrator | 14:55:05.565 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-10-08 14:55:05.591937 | orchestrator | 14:55:05.591 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=9395be58-5877-4585-abab-6e8fa01340a7]
2025-10-08 14:55:05.597043 | orchestrator | 14:55:05.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-10-08 14:55:05.597099 | orchestrator | 14:55:05.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-10-08 14:55:05.598140 | orchestrator | 14:55:05.597 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-10-08 14:55:05.727304 | orchestrator | 14:55:05.727 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=f14bde7d-da60-427e-8d2e-3540f57cfccb]
2025-10-08 14:55:05.732761 | orchestrator | 14:55:05.732 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-10-08 14:55:05.777485 | orchestrator | 14:55:05.777 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=37766782-b3e5-40e0-90da-90956c5b8d19]
2025-10-08 14:55:05.792338 | orchestrator | 14:55:05.792 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-10-08 14:55:05.872264 | orchestrator | 14:55:05.871 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=8d27590d-46ff-4222-8a76-76aadab92368]
2025-10-08 14:55:05.882097 | orchestrator | 14:55:05.881 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-10-08 14:55:05.931246 | orchestrator | 14:55:05.930 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=70e3954d-4e59-4a65-9d0f-0c1d8831a97c]
2025-10-08 14:55:05.942671 | orchestrator | 14:55:05.942 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-10-08 14:55:06.065435 | orchestrator | 14:55:06.065 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=9d8b708a-ac99-43a7-b987-7ad075c970d0]
2025-10-08 14:55:06.078301 | orchestrator | 14:55:06.078 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-10-08 14:55:06.099029 | orchestrator | 14:55:06.098 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ec98b2f8-8170-41ed-bb08-717cca572d02]
2025-10-08 14:55:06.109170 | orchestrator | 14:55:06.109 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-10-08 14:55:06.252531 | orchestrator | 14:55:06.252 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=f31a03d2-74ce-48af-a9ed-496b9001baa1]
2025-10-08 14:55:06.264664 | orchestrator | 14:55:06.264 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-10-08 14:55:06.296745 | orchestrator | 14:55:06.296 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=f99ecfda-206b-4ba2-83e6-cfae4b02d531]
2025-10-08 14:55:06.444387 | orchestrator | 14:55:06.444 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=523b3706-8591-4efd-b2e2-bfa80fb2ce71]
2025-10-08 14:55:06.473629 | orchestrator | 14:55:06.473 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3614c83b-a969-4d10-86f6-bf058fbe238b]
2025-10-08 14:55:06.785122 | orchestrator | 14:55:06.784 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=aa6c3f92-5c53-4d4e-acee-05b15a5c26cb]
2025-10-08 14:55:06.808728 | orchestrator | 14:55:06.808 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=77271b21-0285-4f16-8292-94cc8b0e2e18]
2025-10-08 14:55:06.820275 | orchestrator | 14:55:06.820 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b15d084f-6e2b-40ad-9844-590a4a256e6f]
2025-10-08 14:55:06.879957 | orchestrator | 14:55:06.879 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9b81b530-8a55-43de-a786-ac4088fc75ce]
2025-10-08 14:55:07.009678 | orchestrator | 14:55:07.009 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f7f4b3e5-eca5-4c4e-95fe-64c3e267a9fa]
2025-10-08 14:55:07.125896 | orchestrator | 14:55:07.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=76966401-2a8c-4a91-92fb-384931771a13]
2025-10-08 14:55:10.708399 | orchestrator | 14:55:10.707 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=45c53ae8-074f-4c6b-92aa-13000885e45b]
2025-10-08 14:55:10.733395 | orchestrator | 14:55:10.733 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-10-08 14:55:10.735818 | orchestrator | 14:55:10.735 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-10-08 14:55:10.745053 | orchestrator | 14:55:10.744 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-10-08 14:55:10.750110 | orchestrator | 14:55:10.750 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-10-08 14:55:10.750471 | orchestrator | 14:55:10.750 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-10-08 14:55:10.750484 | orchestrator | 14:55:10.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-10-08 14:55:10.762299 | orchestrator | 14:55:10.762 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-10-08 14:55:12.282267 | orchestrator | 14:55:12.280 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=0652a4d6-8cd6-4111-b4dd-5e8e93bb4202]
2025-10-08 14:55:12.291834 | orchestrator | 14:55:12.291 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-10-08 14:55:12.300577 | orchestrator | 14:55:12.300 STDOUT terraform: local_file.inventory: Creating...
2025-10-08 14:55:12.300662 | orchestrator | 14:55:12.300 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-10-08 14:55:12.307609 | orchestrator | 14:55:12.307 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0acbed36f9e6604dcd303662a4210415ea031626]
2025-10-08 14:55:12.307922 | orchestrator | 14:55:12.307 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=672d7c2a13392d82ca908e977dfc19579dfb384c]
2025-10-08 14:55:13.053972 | orchestrator | 14:55:13.053 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0652a4d6-8cd6-4111-b4dd-5e8e93bb4202]
2025-10-08 14:55:20.737748 | orchestrator | 14:55:20.737 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-10-08 14:55:20.748807 | orchestrator | 14:55:20.748 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-10-08 14:55:20.752188 | orchestrator | 14:55:20.751 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-10-08 14:55:20.752293 | orchestrator | 14:55:20.752 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-10-08 14:55:20.752361 | orchestrator | 14:55:20.752 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-10-08 14:55:20.763337 | orchestrator | 14:55:20.763 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-10-08 14:55:30.738578 | orchestrator | 14:55:30.738 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-10-08 14:55:30.749642 | orchestrator | 14:55:30.749 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-10-08 14:55:30.752875 | orchestrator | 14:55:30.752 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-10-08 14:55:30.753055 | orchestrator | 14:55:30.752 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-10-08 14:55:30.753156 | orchestrator | 14:55:30.752 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-10-08 14:55:30.764644 | orchestrator | 14:55:30.764 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-10-08 14:55:40.739412 | orchestrator | 14:55:40.739 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-10-08 14:55:40.750381 | orchestrator | 14:55:40.750 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-10-08 14:55:40.753661 | orchestrator | 14:55:40.753 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-10-08 14:55:40.753753 | orchestrator | 14:55:40.753 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-10-08 14:55:40.753922 | orchestrator | 14:55:40.753 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-10-08 14:55:40.765076 | orchestrator | 14:55:40.764 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-10-08 14:55:41.397766 | orchestrator | 14:55:41.397 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=6c42537f-8b24-4504-8f7a-d1af226c1286]
2025-10-08 14:55:41.437460 | orchestrator | 14:55:41.437 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=0b38244e-650d-4440-bcc1-7efbef9362e5]
2025-10-08 14:55:50.740039 | orchestrator | 14:55:50.739 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-10-08 14:55:50.751410 | orchestrator | 14:55:50.751 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2025-10-08 14:55:50.754432 | orchestrator | 14:55:50.754 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-10-08 14:55:50.765590 | orchestrator | 14:55:50.765 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2025-10-08 14:55:51.398306 | orchestrator | 14:55:51.397 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 40s [id=9a4fed54-cd98-4d4b-869e-665fff92f438]
2025-10-08 14:55:51.461168 | orchestrator | 14:55:51.457 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=67e2dc80-7f51-459e-9675-01893c96d56f]
2025-10-08 14:55:51.633720 | orchestrator | 14:55:51.633 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=c783e776-0046-4d48-aad1-2fc84262f658]
2025-10-08 14:55:51.740226 | orchestrator | 14:55:51.739 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=22906030-1236-4f9e-a4c9-bde9e906fea3]
2025-10-08 14:55:51.776505 | orchestrator | 14:55:51.776 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-10-08 14:55:51.776821 | orchestrator | 14:55:51.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-10-08 14:55:51.779224 | orchestrator | 14:55:51.779 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-10-08 14:55:51.782637 | orchestrator | 14:55:51.781 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5831678808896341845]
2025-10-08 14:55:51.782680 | orchestrator | 14:55:51.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-10-08 14:55:51.783464 | orchestrator | 14:55:51.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-10-08 14:55:51.788335 | orchestrator | 14:55:51.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-10-08 14:55:51.789253 | orchestrator | 14:55:51.789 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-10-08 14:55:51.792586 | orchestrator | 14:55:51.791 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-10-08 14:55:51.796598 | orchestrator | 14:55:51.796 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-10-08 14:55:51.803531 | orchestrator | 14:55:51.803 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-10-08 14:55:51.809907 | orchestrator | 14:55:51.809 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
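The nine volume attachments being created here pair node servers with the data volumes created earlier. A minimal sketch of how such attachments are typically declared with the OpenStack provider follows; the instance/volume mapping is an assumption for illustration and is not visible in this log (the completion messages below show three volumes landing on each of three servers):

```hcl
# Hypothetical sketch: the real testbed module's mapping of volumes to
# servers is not shown in the log; the index expression here is assumed.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

The `id` of such an attachment is reported as `<instance_id>/<volume_id>`, which matches the completion lines that follow.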
2025-10-08 14:55:55.172752 | orchestrator | 14:55:55.172 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=6c42537f-8b24-4504-8f7a-d1af226c1286/5c2995ee-2a0f-4f5c-ac7c-066cefbff021]
2025-10-08 14:55:55.178154 | orchestrator | 14:55:55.177 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=c783e776-0046-4d48-aad1-2fc84262f658/1a05d404-282d-4245-ac59-6a85ac73ef0f]
2025-10-08 14:55:55.208126 | orchestrator | 14:55:55.207 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=c783e776-0046-4d48-aad1-2fc84262f658/d1c70491-aba4-4ff7-8b88-cbd07cfcddb1]
2025-10-08 14:55:55.225676 | orchestrator | 14:55:55.225 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=22906030-1236-4f9e-a4c9-bde9e906fea3/b2b0a7c4-684b-467f-bea3-a2180df0d298]
2025-10-08 14:55:55.252202 | orchestrator | 14:55:55.251 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=22906030-1236-4f9e-a4c9-bde9e906fea3/501046cd-0181-4267-8e39-455d7db25dff]
2025-10-08 14:55:56.887677 | orchestrator | 14:55:56.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=c783e776-0046-4d48-aad1-2fc84262f658/96ffedcc-0414-421a-b44e-b183e9db41fd]
2025-10-08 14:56:01.267773 | orchestrator | 14:56:01.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=6c42537f-8b24-4504-8f7a-d1af226c1286/f279d016-8061-4f1a-b5de-972c25793956]
2025-10-08 14:56:01.353908 | orchestrator | 14:56:01.353 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=6c42537f-8b24-4504-8f7a-d1af226c1286/8931d93f-304b-4b68-94eb-87cca6c6eade]
2025-10-08 14:56:01.388111 | orchestrator | 14:56:01.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=22906030-1236-4f9e-a4c9-bde9e906fea3/e7b164cd-18c4-4443-b153-66ef822cc182]
2025-10-08 14:56:01.812464 | orchestrator | 14:56:01.812 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-10-08 14:56:11.812853 | orchestrator | 14:56:11.812 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-10-08 14:56:12.074632 | orchestrator | 14:56:12.073 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=0deb9c1b-78e9-4320-903f-9af11a475395]
2025-10-08 14:56:12.097304 | orchestrator | 14:56:12.097 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-10-08 14:56:12.097369 | orchestrator | 14:56:12.097 STDOUT terraform: Outputs:
2025-10-08 14:56:12.097380 | orchestrator | 14:56:12.097 STDOUT terraform: manager_address =
2025-10-08 14:56:12.097388 | orchestrator | 14:56:12.097 STDOUT terraform: private_key =
2025-10-08 14:56:12.547580 | orchestrator | ok: Runtime: 0:01:20.690047
2025-10-08 14:56:12.582237 |
2025-10-08 14:56:12.582362 | TASK [Create infrastructure (stable)]
2025-10-08 14:56:13.115329 | orchestrator | skipping: Conditional result was False
2025-10-08 14:56:13.140403 |
2025-10-08 14:56:13.140609 | TASK [Fetch manager address]
2025-10-08 14:56:13.574618 | orchestrator | ok
2025-10-08 14:56:13.583761 |
2025-10-08 14:56:13.583876 | TASK [Set manager_host address]
2025-10-08 14:56:13.660107 | orchestrator | ok
2025-10-08 14:56:13.668812 |
2025-10-08 14:56:13.668931 | LOOP [Update ansible collections]
2025-10-08 14:56:15.333830 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-10-08 14:56:15.334291 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-10-08 14:56:15.334358 | orchestrator |
Starting galaxy collection install process
2025-10-08 14:56:15.334398 | orchestrator | Process install dependency map
2025-10-08 14:56:15.334434 | orchestrator | Starting collection install process
2025-10-08 14:56:15.334470 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-10-08 14:56:15.334508 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-10-08 14:56:15.334546 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-10-08 14:56:15.334610 | orchestrator | ok: Item: commons Runtime: 0:00:01.375902
2025-10-08 14:56:16.142636 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-10-08 14:56:16.142808 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-10-08 14:56:16.142912 | orchestrator | Starting galaxy collection install process
2025-10-08 14:56:16.142954 | orchestrator | Process install dependency map
2025-10-08 14:56:16.142990 | orchestrator | Starting collection install process
2025-10-08 14:56:16.143025 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2025-10-08 14:56:16.143060 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2025-10-08 14:56:16.143093 | orchestrator | osism.services:999.0.0 was installed successfully
2025-10-08 14:56:16.143146 | orchestrator | ok: Item: services Runtime: 0:00:00.566164
2025-10-08 14:56:16.160314 |
2025-10-08 14:56:16.160457 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-10-08 14:56:26.708817 | orchestrator | ok
2025-10-08 14:56:26.717120 |
2025-10-08 14:56:26.717274 | TASK [Wait a little longer for the manager so that everything is ready]
2025-10-08 14:57:26.760520 | orchestrator | ok
2025-10-08 14:57:26.770498 |
2025-10-08 14:57:26.770622 | TASK [Fetch manager ssh hostkey]
2025-10-08 14:57:28.339765 | orchestrator | Output suppressed because no_log was given
2025-10-08 14:57:28.356528 |
2025-10-08 14:57:28.356683 | TASK [Get ssh keypair from terraform environment]
2025-10-08 14:57:28.891980 | orchestrator | ok: Runtime: 0:00:00.006654
2025-10-08 14:57:28.908817 |
2025-10-08 14:57:28.908988 | TASK [Point out that the following task takes some time and does not give any output]
2025-10-08 14:57:28.957571 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-10-08 14:57:28.967299 |
2025-10-08 14:57:28.967419 | TASK [Run manager part 0]
2025-10-08 14:57:30.402588 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-10-08 14:57:30.455295 | orchestrator |
2025-10-08 14:57:30.455344 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-10-08 14:57:30.455353 | orchestrator |
2025-10-08 14:57:30.455368 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-10-08 14:57:32.283092 | orchestrator | ok: [testbed-manager]
2025-10-08 14:57:32.283166 | orchestrator |
2025-10-08 14:57:32.283207 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-10-08 14:57:32.283225 | orchestrator |
2025-10-08 14:57:32.283242 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-08 14:57:34.744324 | orchestrator | ok: [testbed-manager]
2025-10-08 14:57:34.744429 | orchestrator |
2025-10-08 14:57:34.744438 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-10-08 14:57:35.380846 | orchestrator | ok: [testbed-manager]
2025-10-08 14:57:35.380895 | orchestrator |
2025-10-08 14:57:35.380903 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-10-08 14:57:35.429628 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.429700 | orchestrator |
2025-10-08 14:57:35.429719 | orchestrator | TASK [Update package cache] ****************************************************
2025-10-08 14:57:35.454490 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.454545 | orchestrator |
2025-10-08 14:57:35.454570 | orchestrator | TASK [Install required packages] ***********************************************
2025-10-08 14:57:35.480009 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.480078 | orchestrator |
2025-10-08 14:57:35.480090 | orchestrator | TASK [Remove some python packages] *********************************************
2025-10-08 14:57:35.506959 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.507001 | orchestrator |
2025-10-08 14:57:35.507008 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-10-08 14:57:35.532506 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.532540 | orchestrator |
2025-10-08 14:57:35.532547 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-10-08 14:57:35.560848 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.560904 | orchestrator |
2025-10-08 14:57:35.560916 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-10-08 14:57:35.597256 | orchestrator | skipping: [testbed-manager]
2025-10-08 14:57:35.597340 | orchestrator |
2025-10-08 14:57:35.597362 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-10-08 14:57:36.430898 | orchestrator | changed: [testbed-manager]
2025-10-08 14:57:36.430951 |
orchestrator | 2025-10-08 14:57:36.430958 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-10-08 15:00:19.850231 | orchestrator | changed: [testbed-manager] 2025-10-08 15:00:19.850337 | orchestrator | 2025-10-08 15:00:19.850355 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-10-08 15:01:39.511626 | orchestrator | changed: [testbed-manager] 2025-10-08 15:01:39.511719 | orchestrator | 2025-10-08 15:01:39.511742 | orchestrator | TASK [Install required packages] *********************************************** 2025-10-08 15:02:00.731255 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:00.731350 | orchestrator | 2025-10-08 15:02:00.731369 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-10-08 15:02:10.303906 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:10.303952 | orchestrator | 2025-10-08 15:02:10.303961 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-08 15:02:10.351211 | orchestrator | ok: [testbed-manager] 2025-10-08 15:02:10.351290 | orchestrator | 2025-10-08 15:02:10.351305 | orchestrator | TASK [Get current user] ******************************************************** 2025-10-08 15:02:11.147784 | orchestrator | ok: [testbed-manager] 2025-10-08 15:02:11.147876 | orchestrator | 2025-10-08 15:02:11.147894 | orchestrator | TASK [Create venv directory] *************************************************** 2025-10-08 15:02:11.922693 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:11.922778 | orchestrator | 2025-10-08 15:02:11.922793 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-10-08 15:02:18.433056 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:18.433153 | orchestrator | 2025-10-08 15:02:18.433196 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-10-08 15:02:24.722393 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:24.722490 | orchestrator | 2025-10-08 15:02:24.722509 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-10-08 15:02:27.664888 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:27.664929 | orchestrator | 2025-10-08 15:02:27.664937 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-10-08 15:02:29.564587 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:29.564656 | orchestrator | 2025-10-08 15:02:29.564664 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-10-08 15:02:30.737068 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-08 15:02:30.737165 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-08 15:02:30.737180 | orchestrator | 2025-10-08 15:02:30.737192 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-10-08 15:02:30.775163 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-08 15:02:30.775236 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-08 15:02:30.775250 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-08 15:02:30.775262 | orchestrator | deprecation_warnings=False in ansible.cfg. 
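The venv tasks above (create directory, install netaddr, ansible-core, requests>=2.32.2, docker>=7.1.0) can be sketched as plain shell. This is a hedged, offline-safe sketch: the real play targets /opt/venv, and the pip install is shown commented so the sketch runs without network access.

```shell
set -e
# Temp dir stands in for /opt/venv, which the play creates with root ownership.
VENV_DIR="$(mktemp -d)/venv"
python3 -m venv "$VENV_DIR"
# The play then installs into the venv (needs network, so commented here):
# "$VENV_DIR/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
"$VENV_DIR/bin/python" --version
```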
2025-10-08 15:02:39.790755 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-08 15:02:39.790798 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-08 15:02:39.790804 | orchestrator | 2025-10-08 15:02:39.790810 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-10-08 15:02:40.376138 | orchestrator | changed: [testbed-manager] 2025-10-08 15:02:40.376220 | orchestrator | 2025-10-08 15:02:40.376234 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-10-08 15:06:04.455501 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-10-08 15:06:04.455616 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-10-08 15:06:04.455631 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-10-08 15:06:04.455642 | orchestrator | 2025-10-08 15:06:04.455653 | orchestrator | TASK [Install local collections] *********************************************** 2025-10-08 15:06:06.986957 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-10-08 15:06:06.986993 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-10-08 15:06:06.986998 | orchestrator | 2025-10-08 15:06:06.987003 | orchestrator | PLAY [Create operator user] **************************************************** 2025-10-08 15:06:06.987009 | orchestrator | 2025-10-08 15:06:06.987013 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-08 15:06:08.483766 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:08.483802 | orchestrator | 2025-10-08 15:06:08.483809 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-10-08 15:06:08.531820 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:08.531862 | 
orchestrator | 2025-10-08 15:06:08.531870 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-10-08 15:06:08.600851 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:08.600891 | orchestrator | 2025-10-08 15:06:08.600899 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-10-08 15:06:09.392022 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:09.392110 | orchestrator | 2025-10-08 15:06:09.392136 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-10-08 15:06:10.231512 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:10.231555 | orchestrator | 2025-10-08 15:06:10.231563 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-10-08 15:06:11.711046 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-10-08 15:06:11.711123 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-10-08 15:06:11.711137 | orchestrator | 2025-10-08 15:06:11.711163 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-10-08 15:06:13.140036 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:13.140144 | orchestrator | 2025-10-08 15:06:13.140161 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-10-08 15:06:14.936922 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-10-08 15:06:14.937013 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-10-08 15:06:14.937028 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-10-08 15:06:14.937040 | orchestrator | 2025-10-08 15:06:14.937053 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-10-08 15:06:14.996213 | orchestrator | skipping: 
[testbed-manager] 2025-10-08 15:06:14.996267 | orchestrator | 2025-10-08 15:06:14.996276 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-10-08 15:06:15.596337 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:15.596419 | orchestrator | 2025-10-08 15:06:15.596436 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-10-08 15:06:15.673862 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:15.673913 | orchestrator | 2025-10-08 15:06:15.673919 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-10-08 15:06:16.544383 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-08 15:06:16.544468 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:16.544484 | orchestrator | 2025-10-08 15:06:16.544498 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-10-08 15:06:16.577389 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:16.577455 | orchestrator | 2025-10-08 15:06:16.577469 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-10-08 15:06:16.614249 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:16.614311 | orchestrator | 2025-10-08 15:06:16.614326 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-10-08 15:06:16.648090 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:16.648207 | orchestrator | 2025-10-08 15:06:16.648234 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-10-08 15:06:16.713718 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:16.713822 | orchestrator | 2025-10-08 15:06:16.713840 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-10-08 15:06:17.450101 | orchestrator 
| ok: [testbed-manager] 2025-10-08 15:06:17.450181 | orchestrator | 2025-10-08 15:06:17.450197 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-10-08 15:06:17.450209 | orchestrator | 2025-10-08 15:06:17.450220 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-08 15:06:18.971228 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:18.972086 | orchestrator | 2025-10-08 15:06:18.972108 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-10-08 15:06:19.984191 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:19.984270 | orchestrator | 2025-10-08 15:06:19.984284 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:06:19.984297 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-08 15:06:19.984309 | orchestrator | 2025-10-08 15:06:20.295746 | orchestrator | ok: Runtime: 0:08:50.796753 2025-10-08 15:06:20.311695 | 2025-10-08 15:06:20.311832 | TASK [Point out that logging in to the manager is now possible] 2025-10-08 15:06:20.352869 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-10-08 15:06:20.362865 | 2025-10-08 15:06:20.363008 | TASK [Point out that the following task takes some time and does not give any output] 2025-10-08 15:06:20.411566 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
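The operator role above appends the three language exports to the operator's .bashrc. A hedged sketch of that step, mirroring Ansible's lineinfile-style idempotency (a temp file stands in for the operator's ~/.bashrc):

```shell
BASHRC="$(mktemp)"
# Append each export only if an identical line is not already present,
# so re-running the loop leaves the file unchanged.
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    grep -qxF "$line" "$BASHRC" || echo "$line" >> "$BASHRC"
done
wc -l < "$BASHRC"
```

Running the loop a second time adds nothing, which is why the Ansible task reports "changed" only on the first run.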
2025-10-08 15:06:20.421721 | 2025-10-08 15:06:20.421853 | TASK [Run manager part 1 + 2] 2025-10-08 15:06:21.365887 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-08 15:06:21.422255 | orchestrator | 2025-10-08 15:06:21.422334 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-10-08 15:06:21.422352 | orchestrator | 2025-10-08 15:06:21.422381 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-08 15:06:24.484513 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:24.484690 | orchestrator | 2025-10-08 15:06:24.484777 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-10-08 15:06:24.521363 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:24.521418 | orchestrator | 2025-10-08 15:06:24.521429 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-08 15:06:24.563681 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:24.563723 | orchestrator | 2025-10-08 15:06:24.563741 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-08 15:06:24.604716 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:24.604772 | orchestrator | 2025-10-08 15:06:24.604780 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-08 15:06:24.672248 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:24.672287 | orchestrator | 2025-10-08 15:06:24.672293 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-08 15:06:24.726874 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:24.726928 | orchestrator | 2025-10-08 15:06:24.726940 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-08 15:06:24.771664 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-10-08 15:06:24.771707 | orchestrator | 2025-10-08 15:06:24.771715 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-08 15:06:25.492364 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:25.492435 | orchestrator | 2025-10-08 15:06:25.492454 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-08 15:06:25.538527 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:25.538563 | orchestrator | 2025-10-08 15:06:25.538570 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-08 15:06:26.928383 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:26.928456 | orchestrator | 2025-10-08 15:06:26.928473 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-08 15:06:27.532574 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:27.532645 | orchestrator | 2025-10-08 15:06:27.532661 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-08 15:06:28.721817 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:28.721890 | orchestrator | 2025-10-08 15:06:28.721907 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-08 15:06:46.367453 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:46.367519 | orchestrator | 2025-10-08 15:06:46.367535 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-10-08 15:06:47.031525 | orchestrator | ok: [testbed-manager] 2025-10-08 15:06:47.031608 | orchestrator | 2025-10-08 15:06:47.031625 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-10-08 15:06:47.087455 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:06:47.087538 | orchestrator | 2025-10-08 15:06:47.087554 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-10-08 15:06:47.994203 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:47.994288 | orchestrator | 2025-10-08 15:06:47.994306 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-10-08 15:06:48.949164 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:48.949239 | orchestrator | 2025-10-08 15:06:48.949253 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-10-08 15:06:49.519639 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:49.519722 | orchestrator | 2025-10-08 15:06:49.519765 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-10-08 15:06:49.560844 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-08 15:06:49.560934 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-08 15:06:49.560948 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-08 15:06:49.560961 | orchestrator | deprecation_warnings=False in ansible.cfg. 
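The "Copy SSH public key" / "Copy SSH private key" tasks above place the keypair with strict modes. A hedged sketch of the file layout (a temp dir stands in for the operator home, and the empty files are placeholders for the real key material, which is suppressed in the log):

```shell
set -e
H="$(mktemp -d)"
# .ssh must be 700, the private key 600, the public key 644,
# or sshd's StrictModes checks will reject the key.
install -d -m 700 "$H/.ssh"
install -m 600 /dev/null "$H/.ssh/id_rsa"
install -m 644 /dev/null "$H/.ssh/id_rsa.pub"
stat -c '%a' "$H/.ssh" "$H/.ssh/id_rsa" "$H/.ssh/id_rsa.pub"
```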
2025-10-08 15:06:51.670407 | orchestrator | changed: [testbed-manager] 2025-10-08 15:06:51.670496 | orchestrator | 2025-10-08 15:06:51.670513 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-10-08 15:07:01.396139 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-10-08 15:07:01.396189 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-10-08 15:07:01.396200 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-10-08 15:07:01.396208 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-10-08 15:07:01.396218 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-10-08 15:07:01.396226 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-10-08 15:07:01.396233 | orchestrator | 2025-10-08 15:07:01.396241 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-10-08 15:07:02.463962 | orchestrator | changed: [testbed-manager] 2025-10-08 15:07:02.464000 | orchestrator | 2025-10-08 15:07:02.464008 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-10-08 15:07:02.506097 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:07:02.506177 | orchestrator | 2025-10-08 15:07:02.506194 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-10-08 15:07:05.627286 | orchestrator | changed: [testbed-manager] 2025-10-08 15:07:05.627372 | orchestrator | 2025-10-08 15:07:05.627388 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-10-08 15:07:05.668189 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:07:05.668251 | orchestrator | 2025-10-08 15:07:05.668265 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-10-08 15:08:45.355608 | orchestrator | changed: [testbed-manager] 2025-10-08 
15:08:45.355651 | orchestrator | 2025-10-08 15:08:45.355658 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-08 15:08:46.716932 | orchestrator | ok: [testbed-manager] 2025-10-08 15:08:46.716970 | orchestrator | 2025-10-08 15:08:46.716978 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:08:46.716985 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-10-08 15:08:46.716991 | orchestrator | 2025-10-08 15:08:47.050989 | orchestrator | ok: Runtime: 0:02:26.078815 2025-10-08 15:08:47.066697 | 2025-10-08 15:08:47.066859 | TASK [Reboot manager] 2025-10-08 15:08:48.602995 | orchestrator | ok: Runtime: 0:00:01.006272 2025-10-08 15:08:48.618539 | 2025-10-08 15:08:48.618656 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-10-08 15:09:05.015016 | orchestrator | ok 2025-10-08 15:09:05.024869 | 2025-10-08 15:09:05.024985 | TASK [Wait a little longer for the manager so that everything is ready] 2025-10-08 15:10:05.078320 | orchestrator | ok 2025-10-08 15:10:05.093168 | 2025-10-08 15:10:05.093323 | TASK [Deploy manager + bootstrap nodes] 2025-10-08 15:10:07.743211 | orchestrator | 2025-10-08 15:10:07.743396 | orchestrator | # DEPLOY MANAGER 2025-10-08 15:10:07.743420 | orchestrator | 2025-10-08 15:10:07.743435 | orchestrator | + set -e 2025-10-08 15:10:07.743448 | orchestrator | + echo 2025-10-08 15:10:07.743461 | orchestrator | + echo '# DEPLOY MANAGER' 2025-10-08 15:10:07.743478 | orchestrator | + echo 2025-10-08 15:10:07.743525 | orchestrator | + cat /opt/manager-vars.sh 2025-10-08 15:10:07.746395 | orchestrator | export NUMBER_OF_NODES=6 2025-10-08 15:10:07.746422 | orchestrator | 2025-10-08 15:10:07.746435 | orchestrator | export CEPH_VERSION=reef 2025-10-08 15:10:07.746447 | orchestrator | export CONFIGURATION_VERSION=main 2025-10-08 15:10:07.746459 | orchestrator 
| export MANAGER_VERSION=latest 2025-10-08 15:10:07.746481 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-10-08 15:10:07.746493 | orchestrator | 2025-10-08 15:10:07.746510 | orchestrator | export ARA=false 2025-10-08 15:10:07.746522 | orchestrator | export DEPLOY_MODE=manager 2025-10-08 15:10:07.746539 | orchestrator | export TEMPEST=false 2025-10-08 15:10:07.746551 | orchestrator | export IS_ZUUL=true 2025-10-08 15:10:07.746562 | orchestrator | 2025-10-08 15:10:07.746580 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175 2025-10-08 15:10:07.746591 | orchestrator | export EXTERNAL_API=false 2025-10-08 15:10:07.746602 | orchestrator | 2025-10-08 15:10:07.746613 | orchestrator | export IMAGE_USER=ubuntu 2025-10-08 15:10:07.746627 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-10-08 15:10:07.746638 | orchestrator | 2025-10-08 15:10:07.746649 | orchestrator | export CEPH_STACK=ceph-ansible 2025-10-08 15:10:07.746665 | orchestrator | 2025-10-08 15:10:07.746676 | orchestrator | + echo 2025-10-08 15:10:07.746688 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-08 15:10:07.747427 | orchestrator | ++ export INTERACTIVE=false 2025-10-08 15:10:07.747446 | orchestrator | ++ INTERACTIVE=false 2025-10-08 15:10:07.747459 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-08 15:10:07.747473 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-08 15:10:07.747642 | orchestrator | + source /opt/manager-vars.sh 2025-10-08 15:10:07.747658 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-08 15:10:07.747671 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-08 15:10:07.747686 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-08 15:10:07.747697 | orchestrator | ++ CEPH_VERSION=reef 2025-10-08 15:10:07.747708 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-08 15:10:07.747719 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-08 15:10:07.747730 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-08 15:10:07.747741 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-10-08 15:10:07.747752 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-08 15:10:07.747771 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-08 15:10:07.747952 | orchestrator | ++ export ARA=false 2025-10-08 15:10:07.747968 | orchestrator | ++ ARA=false 2025-10-08 15:10:07.747979 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-08 15:10:07.747990 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-08 15:10:07.748001 | orchestrator | ++ export TEMPEST=false 2025-10-08 15:10:07.748012 | orchestrator | ++ TEMPEST=false 2025-10-08 15:10:07.748023 | orchestrator | ++ export IS_ZUUL=true 2025-10-08 15:10:07.748033 | orchestrator | ++ IS_ZUUL=true 2025-10-08 15:10:07.748048 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175 2025-10-08 15:10:07.748060 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175 2025-10-08 15:10:07.748071 | orchestrator | ++ export EXTERNAL_API=false 2025-10-08 15:10:07.748082 | orchestrator | ++ EXTERNAL_API=false 2025-10-08 15:10:07.748092 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-08 15:10:07.748103 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-08 15:10:07.748114 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-08 15:10:07.748124 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-08 15:10:07.748135 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-08 15:10:07.748146 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-08 15:10:07.748157 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-10-08 15:10:07.806513 | orchestrator | + docker version 2025-10-08 15:10:08.112923 | orchestrator | Client: Docker Engine - Community 2025-10-08 15:10:08.112979 | orchestrator | Version: 27.5.1 2025-10-08 15:10:08.112992 | orchestrator | API version: 1.47 2025-10-08 15:10:08.113003 | orchestrator | Go version: go1.22.11 2025-10-08 15:10:08.113013 | orchestrator | Git commit: 9f9e405 2025-10-08 
15:10:08.113024 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-10-08 15:10:08.113035 | orchestrator | OS/Arch: linux/amd64
2025-10-08 15:10:08.113046 | orchestrator | Context: default
2025-10-08 15:10:08.113057 | orchestrator |
2025-10-08 15:10:08.113068 | orchestrator | Server: Docker Engine - Community
2025-10-08 15:10:08.113079 | orchestrator | Engine:
2025-10-08 15:10:08.113090 | orchestrator | Version: 27.5.1
2025-10-08 15:10:08.113101 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-10-08 15:10:08.113135 | orchestrator | Go version: go1.22.11
2025-10-08 15:10:08.113146 | orchestrator | Git commit: 4c9b3b0
2025-10-08 15:10:08.113157 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-10-08 15:10:08.113168 | orchestrator | OS/Arch: linux/amd64
2025-10-08 15:10:08.113178 | orchestrator | Experimental: false
2025-10-08 15:10:08.113189 | orchestrator | containerd:
2025-10-08 15:10:08.113200 | orchestrator | Version: v1.7.28
2025-10-08 15:10:08.113211 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866
2025-10-08 15:10:08.113222 | orchestrator | runc:
2025-10-08 15:10:08.113233 | orchestrator | Version: 1.3.0
2025-10-08 15:10:08.113243 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1
2025-10-08 15:10:08.113254 | orchestrator | docker-init:
2025-10-08 15:10:08.113265 | orchestrator | Version: 0.19.0
2025-10-08 15:10:08.113277 | orchestrator | GitCommit: de40ad0
2025-10-08 15:10:08.116042 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-10-08 15:10:08.127270 | orchestrator | + set -e
2025-10-08 15:10:08.127290 | orchestrator | + source /opt/manager-vars.sh
2025-10-08 15:10:08.127302 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-08 15:10:08.127313 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-08 15:10:08.127323 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-08 15:10:08.127334 | orchestrator | ++ CEPH_VERSION=reef
2025-10-08 15:10:08.127344 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-08 15:10:08.127355 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-08 15:10:08.127365 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 15:10:08.127376 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 15:10:08.127387 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-08 15:10:08.127397 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-08 15:10:08.127408 | orchestrator | ++ export ARA=false
2025-10-08 15:10:08.127419 | orchestrator | ++ ARA=false
2025-10-08 15:10:08.127430 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-08 15:10:08.127440 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-08 15:10:08.127451 | orchestrator | ++ export TEMPEST=false
2025-10-08 15:10:08.127461 | orchestrator | ++ TEMPEST=false
2025-10-08 15:10:08.127472 | orchestrator | ++ export IS_ZUUL=true
2025-10-08 15:10:08.127482 | orchestrator | ++ IS_ZUUL=true
2025-10-08 15:10:08.127493 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 15:10:08.127504 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 15:10:08.127514 | orchestrator | ++ export EXTERNAL_API=false
2025-10-08 15:10:08.127525 | orchestrator | ++ EXTERNAL_API=false
2025-10-08 15:10:08.127535 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-08 15:10:08.127546 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-08 15:10:08.127557 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-08 15:10:08.127568 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-08 15:10:08.127579 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-08 15:10:08.127589 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-08 15:10:08.127600 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-08 15:10:08.127610 | orchestrator | ++ export INTERACTIVE=false
2025-10-08 15:10:08.127621 | orchestrator | ++ INTERACTIVE=false
2025-10-08 15:10:08.127632 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-08 15:10:08.127645 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-08 15:10:08.127659 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-08 15:10:08.127670 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 15:10:08.127681 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-10-08 15:10:08.135299 | orchestrator | + set -e
2025-10-08 15:10:08.135333 | orchestrator | + VERSION=reef
2025-10-08 15:10:08.136735 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-10-08 15:10:08.144321 | orchestrator | + [[ -n ceph_version: reef ]]
2025-10-08 15:10:08.144353 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-10-08 15:10:08.151632 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-10-08 15:10:08.157521 | orchestrator | + set -e
2025-10-08 15:10:08.157564 | orchestrator | + VERSION=2024.2
2025-10-08 15:10:08.158183 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-10-08 15:10:08.160347 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-10-08 15:10:08.160422 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-10-08 15:10:08.166013 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-10-08 15:10:08.166885 | orchestrator | ++ semver latest 7.0.0
2025-10-08 15:10:08.227627 | orchestrator | + [[ -1 -ge 0 ]]
2025-10-08 15:10:08.227680 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 15:10:08.227692 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-10-08 15:10:08.227704 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-10-08 15:10:08.319604 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-10-08 15:10:08.320936 | orchestrator | + source /opt/venv/bin/activate
2025-10-08 15:10:08.322076 | orchestrator | ++ deactivate nondestructive
2025-10-08 15:10:08.322111 | orchestrator | ++ '[' -n '' ']'
2025-10-08 15:10:08.322134 | orchestrator | ++ '[' -n '' ']'
2025-10-08 15:10:08.322354 | orchestrator | ++ hash -r
2025-10-08 15:10:08.322369 | orchestrator | ++ '[' -n '' ']'
2025-10-08 15:10:08.322380 | orchestrator | ++ unset VIRTUAL_ENV
2025-10-08 15:10:08.322391 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-10-08 15:10:08.322403 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-10-08 15:10:08.322414 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-10-08 15:10:08.322425 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-10-08 15:10:08.322437 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-10-08 15:10:08.322498 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-10-08 15:10:08.322512 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-10-08 15:10:08.322523 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-10-08 15:10:08.322535 | orchestrator | ++ export PATH
2025-10-08 15:10:08.322546 | orchestrator | ++ '[' -n '' ']'
2025-10-08 15:10:08.322561 | orchestrator | ++ '[' -z '' ']'
2025-10-08 15:10:08.322572 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-10-08 15:10:08.322582 | orchestrator | ++ PS1='(venv) '
2025-10-08 15:10:08.322593 | orchestrator | ++ export PS1
2025-10-08 15:10:08.322604 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-10-08 15:10:08.322615 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-10-08 15:10:08.322626 | orchestrator | ++ hash -r
2025-10-08 15:10:08.322897 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-10-08 15:10:09.700452 | orchestrator |
2025-10-08 15:10:09.700530 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-10-08 15:10:09.700544 | orchestrator |
2025-10-08 15:10:09.700555 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-10-08 15:10:10.289083 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:10.289167 | orchestrator |
2025-10-08 15:10:10.289183 | orchestrator | TASK [Copy fact files] *********************************************************
2025-10-08 15:10:11.306277 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:11.306351 | orchestrator |
2025-10-08 15:10:11.306365 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-10-08 15:10:11.306377 | orchestrator |
2025-10-08 15:10:11.306388 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-08 15:10:13.792926 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:13.793021 | orchestrator |
2025-10-08 15:10:13.793036 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-10-08 15:10:13.850493 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:13.850515 | orchestrator |
2025-10-08 15:10:13.850530 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-10-08 15:10:14.326248 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:14.326339 | orchestrator |
2025-10-08 15:10:14.326353 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-10-08 15:10:14.370302 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:14.370328 | orchestrator |
2025-10-08 15:10:14.370340 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-10-08 15:10:14.727676 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:14.727772 | orchestrator |
2025-10-08 15:10:14.727787 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-10-08 15:10:14.783658 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:14.783691 | orchestrator |
2025-10-08 15:10:14.783703 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-10-08 15:10:15.146130 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:15.146226 | orchestrator |
2025-10-08 15:10:15.146243 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-10-08 15:10:15.287620 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:15.287698 | orchestrator |
2025-10-08 15:10:15.287711 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-10-08 15:10:15.287723 | orchestrator |
2025-10-08 15:10:15.287737 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-08 15:10:17.048537 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:17.048630 | orchestrator |
2025-10-08 15:10:17.048644 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-10-08 15:10:17.165160 | orchestrator | included: osism.services.traefik for testbed-manager
2025-10-08 15:10:17.165219 | orchestrator |
2025-10-08 15:10:17.165231 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-10-08 15:10:17.224560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-10-08 15:10:17.224627 | orchestrator |
2025-10-08 15:10:17.224643 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-10-08 15:10:18.346277 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-10-08 15:10:18.346373 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-10-08 15:10:18.346387 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-10-08 15:10:18.346399 | orchestrator |
2025-10-08 15:10:18.346411 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-10-08 15:10:20.244335 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-10-08 15:10:20.244458 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-10-08 15:10:20.244475 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-10-08 15:10:20.244501 | orchestrator |
2025-10-08 15:10:20.244514 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-10-08 15:10:20.914156 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-08 15:10:20.914243 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:20.914258 | orchestrator |
2025-10-08 15:10:20.914270 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-10-08 15:10:21.570419 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-08 15:10:21.570512 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:21.570525 | orchestrator |
2025-10-08 15:10:21.570537 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-10-08 15:10:21.620925 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:21.620961 | orchestrator |
2025-10-08 15:10:21.620978 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-10-08 15:10:21.949092 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:21.949173 | orchestrator |
2025-10-08 15:10:21.949186 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-10-08 15:10:22.009852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-10-08 15:10:22.009888 | orchestrator |
2025-10-08 15:10:22.009901 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-10-08 15:10:22.924899 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:22.924985 | orchestrator |
2025-10-08 15:10:22.924998 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-10-08 15:10:23.674425 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:23.674525 | orchestrator |
2025-10-08 15:10:23.674541 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-10-08 15:10:34.725302 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:34.725414 | orchestrator |
2025-10-08 15:10:34.725431 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-10-08 15:10:34.789764 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:34.789875 | orchestrator |
2025-10-08 15:10:34.789893 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-10-08 15:10:34.789906 | orchestrator |
2025-10-08 15:10:34.789918 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-08 15:10:36.633298 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:36.633404 | orchestrator |
2025-10-08 15:10:36.633452 | orchestrator | TASK [Apply manager role] ******************************************************
2025-10-08 15:10:36.771901 | orchestrator | included: osism.services.manager for testbed-manager
2025-10-08 15:10:36.771973 | orchestrator |
2025-10-08 15:10:36.771986 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-10-08 15:10:36.836233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-10-08 15:10:36.836262 | orchestrator |
2025-10-08 15:10:36.836274 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-10-08 15:10:39.560018 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:39.560115 | orchestrator |
2025-10-08 15:10:39.560130 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-10-08 15:10:39.613968 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:39.614134 | orchestrator |
2025-10-08 15:10:39.614167 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-10-08 15:10:39.752965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-10-08 15:10:39.753027 | orchestrator |
2025-10-08 15:10:39.753037 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-10-08 15:10:42.698222 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-10-08 15:10:42.698327 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-10-08 15:10:42.698342 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-10-08 15:10:42.698355 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-10-08 15:10:42.698366 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-10-08 15:10:42.698378 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-10-08 15:10:42.698389 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-10-08 15:10:42.698400 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-10-08 15:10:42.698412 | orchestrator |
2025-10-08 15:10:42.698424 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-10-08 15:10:43.361256 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:43.361351 | orchestrator |
2025-10-08 15:10:43.361366 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-10-08 15:10:43.990222 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:43.990318 | orchestrator |
2025-10-08 15:10:43.990332 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-10-08 15:10:44.066986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-10-08 15:10:44.067078 | orchestrator |
2025-10-08 15:10:44.067093 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-10-08 15:10:45.330561 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-10-08 15:10:45.330654 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-10-08 15:10:45.330668 | orchestrator |
2025-10-08 15:10:45.330681 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-10-08 15:10:45.998320 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:45.998412 | orchestrator |
2025-10-08 15:10:45.998428 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-10-08 15:10:46.058767 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:46.058819 | orchestrator |
2025-10-08 15:10:46.058883 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-10-08 15:10:46.135541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-10-08 15:10:46.135598 | orchestrator |
2025-10-08 15:10:46.135613 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-10-08 15:10:46.819826 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:46.819967 | orchestrator |
2025-10-08 15:10:46.819982 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-10-08 15:10:46.902592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-10-08 15:10:46.902709 | orchestrator |
2025-10-08 15:10:46.902725 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-10-08 15:10:48.307607 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-08 15:10:48.308461 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-08 15:10:48.308490 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:48.308504 | orchestrator |
2025-10-08 15:10:48.308515 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-10-08 15:10:48.965465 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:48.965552 | orchestrator |
2025-10-08 15:10:48.965565 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-10-08 15:10:49.026472 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:49.026555 | orchestrator |
2025-10-08 15:10:49.026566 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-10-08 15:10:49.115865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-10-08 15:10:49.115949 | orchestrator |
2025-10-08 15:10:49.115961 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-10-08 15:10:49.681202 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:49.681295 | orchestrator |
2025-10-08 15:10:49.681310 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-10-08 15:10:50.117624 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:50.117712 | orchestrator |
2025-10-08 15:10:50.117726 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-10-08 15:10:51.424573 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-10-08 15:10:51.424672 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-10-08 15:10:51.424687 | orchestrator |
2025-10-08 15:10:51.424700 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-10-08 15:10:52.080340 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:52.080464 | orchestrator |
2025-10-08 15:10:52.080492 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-10-08 15:10:52.518906 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:52.519007 | orchestrator |
2025-10-08 15:10:52.519022 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-10-08 15:10:52.895181 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:52.895246 | orchestrator |
2025-10-08 15:10:52.895259 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-10-08 15:10:52.947444 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:52.947469 | orchestrator |
2025-10-08 15:10:52.947480 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-10-08 15:10:53.039696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-10-08 15:10:53.039721 | orchestrator |
2025-10-08 15:10:53.039734 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-10-08 15:10:53.096085 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:53.096108 | orchestrator |
2025-10-08 15:10:53.096119 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-10-08 15:10:55.167767 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-10-08 15:10:55.167916 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-10-08 15:10:55.167931 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-10-08 15:10:55.167942 | orchestrator |
2025-10-08 15:10:55.167953 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-10-08 15:10:55.906487 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:55.906587 | orchestrator |
2025-10-08 15:10:55.906603 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-10-08 15:10:56.650376 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:56.650443 | orchestrator |
2025-10-08 15:10:56.650455 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-10-08 15:10:57.468779 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:57.468893 | orchestrator |
2025-10-08 15:10:57.468907 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-10-08 15:10:57.565123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-10-08 15:10:57.565160 | orchestrator |
2025-10-08 15:10:57.565173 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-10-08 15:10:57.614192 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:57.614214 | orchestrator |
2025-10-08 15:10:57.614226 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-10-08 15:10:58.354383 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-10-08 15:10:58.354461 | orchestrator |
2025-10-08 15:10:58.354476 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-10-08 15:10:58.439691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-10-08 15:10:58.439719 | orchestrator |
2025-10-08 15:10:58.439732 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-10-08 15:10:59.174962 | orchestrator | changed: [testbed-manager]
2025-10-08 15:10:59.175048 | orchestrator |
2025-10-08 15:10:59.175062 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-10-08 15:10:59.774550 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:59.774641 | orchestrator |
2025-10-08 15:10:59.774655 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-10-08 15:10:59.830996 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:10:59.831058 | orchestrator |
2025-10-08 15:10:59.831073 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-10-08 15:10:59.880471 | orchestrator | ok: [testbed-manager]
2025-10-08 15:10:59.880548 | orchestrator |
2025-10-08 15:10:59.880572 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-10-08 15:11:00.726467 | orchestrator | changed: [testbed-manager]
2025-10-08 15:11:00.726557 | orchestrator |
2025-10-08 15:11:00.726572 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-10-08 15:12:09.838456 | orchestrator | changed: [testbed-manager]
2025-10-08 15:12:09.838575 | orchestrator |
2025-10-08 15:12:09.838593 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-10-08 15:12:10.765813 | orchestrator | ok: [testbed-manager]
2025-10-08 15:12:10.765945 | orchestrator |
2025-10-08 15:12:10.765963 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-10-08 15:12:10.853584 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:12:10.853642 | orchestrator |
2025-10-08 15:12:10.853658 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-10-08 15:12:13.182414 | orchestrator | changed: [testbed-manager]
2025-10-08 15:12:13.182520 | orchestrator |
2025-10-08 15:12:13.182537 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-10-08 15:12:13.227561 | orchestrator | ok: [testbed-manager]
2025-10-08 15:12:13.227653 | orchestrator |
2025-10-08 15:12:13.227669 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-10-08 15:12:13.227682 | orchestrator |
2025-10-08 15:12:13.228387 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-10-08 15:12:13.270657 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:12:13.270713 | orchestrator |
2025-10-08 15:12:13.270724 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-10-08 15:13:13.322447 | orchestrator | Pausing for 60 seconds
2025-10-08 15:13:13.322498 | orchestrator | changed: [testbed-manager]
2025-10-08 15:13:13.322511 | orchestrator |
2025-10-08 15:13:13.322523 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-10-08 15:13:18.007555 | orchestrator | changed: [testbed-manager]
2025-10-08 15:13:18.007669 | orchestrator |
2025-10-08 15:13:18.007686 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-10-08 15:14:20.296555 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-10-08 15:14:20.296649 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-10-08 15:14:20.296663 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2025-10-08 15:14:20.296704 | orchestrator | changed: [testbed-manager]
2025-10-08 15:14:20.296718 | orchestrator |
2025-10-08 15:14:20.296731 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-10-08 15:14:30.862376 | orchestrator | changed: [testbed-manager]
2025-10-08 15:14:30.862491 | orchestrator |
2025-10-08 15:14:30.862509 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-10-08 15:14:30.943305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-10-08 15:14:30.943394 | orchestrator |
2025-10-08 15:14:30.943409 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-10-08 15:14:30.943421 | orchestrator |
2025-10-08 15:14:30.943433 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-10-08 15:14:30.992289 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:14:30.992361 | orchestrator |
2025-10-08 15:14:30.992375 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2025-10-08 15:14:31.080636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2025-10-08 15:14:31.080712 | orchestrator |
2025-10-08 15:14:31.080727 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2025-10-08 15:14:31.875709 | orchestrator | changed: [testbed-manager]
2025-10-08 15:14:31.875756 | orchestrator |
2025-10-08 15:14:31.875767 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-10-08 15:14:35.831622 | orchestrator | ok: [testbed-manager]
2025-10-08 15:14:35.831720 | orchestrator |
2025-10-08 15:14:35.831758 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-10-08 15:14:35.898152 | orchestrator | ok: [testbed-manager] => {
2025-10-08 15:14:35.898235 | orchestrator | "version_check_result.stdout_lines": [
2025-10-08 15:14:35.898247 | orchestrator | "=== OSISM Container Version Check ===",
2025-10-08 15:14:35.898256 | orchestrator | "Checking running containers against expected versions...",
2025-10-08 15:14:35.898265 | orchestrator | "",
2025-10-08 15:14:35.898274 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-10-08 15:14:35.898282 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-10-08 15:14:35.898701 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898715 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-10-08 15:14:35.898723 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.898731 | orchestrator | "",
2025-10-08 15:14:35.898739 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-10-08 15:14:35.898747 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-10-08 15:14:35.898755 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898763 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-10-08 15:14:35.898771 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.898779 | orchestrator | "",
2025-10-08 15:14:35.898787 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-10-08 15:14:35.898795 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-10-08 15:14:35.898803 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898811 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-10-08 15:14:35.898819 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.898827 | orchestrator | "",
2025-10-08 15:14:35.898835 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-10-08 15:14:35.898843 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-10-08 15:14:35.898851 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898859 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-10-08 15:14:35.898867 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.898874 | orchestrator | "",
2025-10-08 15:14:35.898882 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-10-08 15:14:35.898928 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-10-08 15:14:35.898937 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898945 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-10-08 15:14:35.898953 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.898961 | orchestrator | "",
2025-10-08 15:14:35.898968 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-10-08 15:14:35.898976 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.898984 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.898992 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899000 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899007 | orchestrator | "",
2025-10-08 15:14:35.899015 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-10-08 15:14:35.899023 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-10-08 15:14:35.899031 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899038 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-10-08 15:14:35.899046 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899054 | orchestrator | "",
2025-10-08 15:14:35.899070 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-10-08 15:14:35.899078 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-10-08 15:14:35.899086 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899094 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-10-08 15:14:35.899101 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899109 | orchestrator | "",
2025-10-08 15:14:35.899117 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-10-08 15:14:35.899125 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-10-08 15:14:35.899136 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899144 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-10-08 15:14:35.899152 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899160 | orchestrator | "",
2025-10-08 15:14:35.899168 | orchestrator | "Checking service: redis (Redis Cache)",
2025-10-08 15:14:35.899176 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-10-08 15:14:35.899184 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899191 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-10-08 15:14:35.899199 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899207 | orchestrator | "",
2025-10-08 15:14:35.899215 | orchestrator | "Checking service: api (OSISM API Service)",
2025-10-08 15:14:35.899222 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899230 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899238 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899245 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899253 | orchestrator | "",
2025-10-08 15:14:35.899261 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-10-08 15:14:35.899269 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899276 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899284 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899292 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899300 | orchestrator | "",
2025-10-08 15:14:35.899307 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-10-08 15:14:35.899315 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899323 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899331 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899338 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899346 | orchestrator | "",
2025-10-08 15:14:35.899354 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-10-08 15:14:35.899362 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899369 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899383 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899391 | orchestrator | " Status: ✅ MATCH",
2025-10-08 15:14:35.899399 | orchestrator | "",
2025-10-08 15:14:35.899406 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-10-08 15:14:35.899426 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899435 | orchestrator | " Enabled: true",
2025-10-08 15:14:35.899443 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-08 15:14:35.899451
| orchestrator | " Status: ✅ MATCH", 2025-10-08 15:14:35.899459 | orchestrator | "", 2025-10-08 15:14:35.899467 | orchestrator | "=== Summary ===", 2025-10-08 15:14:35.899474 | orchestrator | "Errors (version mismatches): 0", 2025-10-08 15:14:35.899482 | orchestrator | "Warnings (expected containers not running): 0", 2025-10-08 15:14:35.899490 | orchestrator | "", 2025-10-08 15:14:35.899497 | orchestrator | "✅ All running containers match expected versions!" 2025-10-08 15:14:35.899505 | orchestrator | ] 2025-10-08 15:14:35.899513 | orchestrator | } 2025-10-08 15:14:35.899521 | orchestrator | 2025-10-08 15:14:35.899530 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-10-08 15:14:35.953935 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:14:35.953965 | orchestrator | 2025-10-08 15:14:35.953976 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:14:35.953985 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-08 15:14:35.953993 | orchestrator | 2025-10-08 15:14:36.075341 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-08 15:14:36.075393 | orchestrator | + deactivate 2025-10-08 15:14:36.075404 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-10-08 15:14:36.075415 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-08 15:14:36.075424 | orchestrator | + export PATH 2025-10-08 15:14:36.075434 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-10-08 15:14:36.075443 | orchestrator | + '[' -n '' ']' 2025-10-08 15:14:36.075453 | orchestrator | + hash -r 2025-10-08 15:14:36.075462 | orchestrator | + '[' -n '' ']' 2025-10-08 15:14:36.075471 | orchestrator | + unset VIRTUAL_ENV 2025-10-08 15:14:36.075480 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2025-10-08 15:14:36.075489 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-10-08 15:14:36.075498 | orchestrator | + unset -f deactivate 2025-10-08 15:14:36.075507 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-10-08 15:14:36.080521 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-08 15:14:36.080538 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-10-08 15:14:36.080546 | orchestrator | + local max_attempts=60 2025-10-08 15:14:36.080555 | orchestrator | + local name=ceph-ansible 2025-10-08 15:14:36.080564 | orchestrator | + local attempt_num=1 2025-10-08 15:14:36.081271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-08 15:14:36.115067 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:14:36.115102 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-08 15:14:36.115113 | orchestrator | + local max_attempts=60 2025-10-08 15:14:36.115124 | orchestrator | + local name=kolla-ansible 2025-10-08 15:14:36.115135 | orchestrator | + local attempt_num=1 2025-10-08 15:14:36.115493 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-08 15:14:36.146509 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:14:36.146532 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-08 15:14:36.146543 | orchestrator | + local max_attempts=60 2025-10-08 15:14:36.146553 | orchestrator | + local name=osism-ansible 2025-10-08 15:14:36.146564 | orchestrator | + local attempt_num=1 2025-10-08 15:14:36.147193 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-08 15:14:36.180392 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:14:36.180424 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-08 15:14:36.180435 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh
2025-10-08 15:14:36.927595 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-10-08 15:14:37.159650 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-10-08 15:14:37.159751 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-10-08 15:14:37.159765 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-10-08 15:14:37.159776 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-10-08 15:14:37.159789 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-10-08 15:14:37.159801 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-10-08 15:14:37.159828 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-10-08 15:14:37.159839 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-10-08 15:14:37.159850 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-10-08 15:14:37.159860 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-10-08 15:14:37.159871 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-10-08 15:14:37.159882 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-10-08 15:14:37.159939 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-10-08 15:14:37.159953 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2025-10-08 15:14:37.159964 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-10-08 15:14:37.159975 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-10-08 15:14:37.166167 | orchestrator | ++ semver latest 7.0.0
2025-10-08 15:14:37.209778 | orchestrator | + [[ -1 -ge 0 ]]
2025-10-08 15:14:37.209826 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 15:14:37.209840 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-10-08 15:14:37.213558 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-10-08 15:14:49.485598 | orchestrator | 2025-10-08 15:14:49 | INFO  | Task 2120c91f-8b9a-48b1-8658-ac1b27834ef5 (resolvconf) was prepared for execution.
2025-10-08 15:14:49.485704 | orchestrator | 2025-10-08 15:14:49 | INFO  | It takes a moment until task 2120c91f-8b9a-48b1-8658-ac1b27834ef5 (resolvconf) has been started and output is visible here.
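The shell trace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch of such a helper, reconstructed from the trace: the function name, arguments, and the `docker inspect` check come from the log, while the polling interval and the stubbed `docker` command (included so the sketch runs without a Docker daemon) are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# The 5-second polling interval is an assumption; the trace only shows the
# health check and the max_attempts/name arguments.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy".
    while [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    return 0
}

# Hypothetical stub so the sketch is runnable here; the job itself calls the
# real /usr/bin/docker against running containers.
docker() { echo healthy; }
wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
```

With the stub in place the loop exits on the first check, mirroring the trace, where each container was already healthy on the first `docker inspect`.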
2025-10-08 15:15:03.688790 | orchestrator |
2025-10-08 15:15:03.688895 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-10-08 15:15:03.688976 | orchestrator |
2025-10-08 15:15:03.688989 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-08 15:15:03.689000 | orchestrator | Wednesday 08 October 2025 15:14:53 +0000 (0:00:00.142) 0:00:00.142 *****
2025-10-08 15:15:03.689012 | orchestrator | ok: [testbed-manager]
2025-10-08 15:15:03.689024 | orchestrator |
2025-10-08 15:15:03.689035 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-10-08 15:15:03.689046 | orchestrator | Wednesday 08 October 2025 15:14:57 +0000 (0:00:03.799) 0:00:03.941 *****
2025-10-08 15:15:03.689057 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:15:03.689069 | orchestrator |
2025-10-08 15:15:03.689079 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-10-08 15:15:03.689090 | orchestrator | Wednesday 08 October 2025 15:14:57 +0000 (0:00:00.064) 0:00:04.005 *****
2025-10-08 15:15:03.689101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-10-08 15:15:03.689113 | orchestrator |
2025-10-08 15:15:03.689134 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-10-08 15:15:03.689146 | orchestrator | Wednesday 08 October 2025 15:14:57 +0000 (0:00:00.068) 0:00:04.074 *****
2025-10-08 15:15:03.689157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-10-08 15:15:03.689168 | orchestrator |
2025-10-08 15:15:03.689179 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-10-08 15:15:03.689190 | orchestrator | Wednesday 08 October 2025 15:14:57 +0000 (0:00:00.078) 0:00:04.152 *****
2025-10-08 15:15:03.689200 | orchestrator | ok: [testbed-manager]
2025-10-08 15:15:03.689211 | orchestrator |
2025-10-08 15:15:03.689222 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-10-08 15:15:03.689232 | orchestrator | Wednesday 08 October 2025 15:14:58 +0000 (0:00:01.173) 0:00:05.325 *****
2025-10-08 15:15:03.689243 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:15:03.689254 | orchestrator |
2025-10-08 15:15:03.689265 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-10-08 15:15:03.689276 | orchestrator | Wednesday 08 October 2025 15:14:58 +0000 (0:00:00.068) 0:00:05.394 *****
2025-10-08 15:15:03.689286 | orchestrator | ok: [testbed-manager]
2025-10-08 15:15:03.689297 | orchestrator |
2025-10-08 15:15:03.689308 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-10-08 15:15:03.689321 | orchestrator | Wednesday 08 October 2025 15:14:59 +0000 (0:00:00.493) 0:00:05.887 *****
2025-10-08 15:15:03.689333 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:15:03.689346 | orchestrator |
2025-10-08 15:15:03.689358 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-10-08 15:15:03.689371 | orchestrator | Wednesday 08 October 2025 15:14:59 +0000 (0:00:00.084) 0:00:05.972 *****
2025-10-08 15:15:03.689383 | orchestrator | changed: [testbed-manager]
2025-10-08 15:15:03.689395 | orchestrator |
2025-10-08 15:15:03.689407 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-10-08 15:15:03.689419 | orchestrator | Wednesday 08 October 2025 15:15:00 +0000 (0:00:00.535) 0:00:06.508 *****
2025-10-08 15:15:03.689431 | orchestrator | changed: [testbed-manager]
2025-10-08 15:15:03.689444 | orchestrator |
2025-10-08 15:15:03.689456 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-10-08 15:15:03.689468 | orchestrator | Wednesday 08 October 2025 15:15:01 +0000 (0:00:01.107) 0:00:07.615 *****
2025-10-08 15:15:03.689481 | orchestrator | ok: [testbed-manager]
2025-10-08 15:15:03.689493 | orchestrator |
2025-10-08 15:15:03.689506 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-10-08 15:15:03.689541 | orchestrator | Wednesday 08 October 2025 15:15:02 +0000 (0:00:01.070) 0:00:08.686 *****
2025-10-08 15:15:03.689554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-10-08 15:15:03.689567 | orchestrator |
2025-10-08 15:15:03.689578 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-10-08 15:15:03.689591 | orchestrator | Wednesday 08 October 2025 15:15:02 +0000 (0:00:00.078) 0:00:08.764 *****
2025-10-08 15:15:03.689603 | orchestrator | changed: [testbed-manager]
2025-10-08 15:15:03.689615 | orchestrator |
2025-10-08 15:15:03.689627 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:15:03.689640 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-08 15:15:03.689652 | orchestrator |
2025-10-08 15:15:03.689665 | orchestrator |
2025-10-08 15:15:03.689677 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:15:03.689688 | orchestrator | Wednesday 08 October 2025 15:15:03 +0000 (0:00:01.159) 0:00:09.924 *****
2025-10-08 15:15:03.689699 | orchestrator | ===============================================================================
2025-10-08 15:15:03.689710 | orchestrator | Gathering Facts --------------------------------------------------------- 3.80s
2025-10-08 15:15:03.689721 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s
2025-10-08 15:15:03.689731 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s
2025-10-08 15:15:03.689742 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s
2025-10-08 15:15:03.689753 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.07s
2025-10-08 15:15:03.689763 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2025-10-08 15:15:03.689791 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-10-08 15:15:03.689803 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-10-08 15:15:03.689813 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-10-08 15:15:03.689824 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-10-08 15:15:03.689841 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-10-08 15:15:03.689853 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-10-08 15:15:03.689864 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-10-08 15:15:03.998791 | orchestrator | + osism apply sshconfig
2025-10-08 15:15:16.148214 | orchestrator | 2025-10-08 15:15:16 | INFO  | Task c2bb97d6-1a45-45a6-a0b4-b8a50c00c278 (sshconfig) was prepared for execution.
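The `osism apply sshconfig` run that follows writes one SSH config fragment per host into a `.ssh/config.d` directory and then assembles them into a single config (the "Assemble ssh config" task suggests `ansible.builtin.assemble`). A minimal shell sketch of that flow, under assumptions: the fragment contents and the temp directory are illustrative, while the real role templates per-host fragments into the operator user's home.

```shell
#!/bin/sh
# Sketch of the sshconfig role's flow: one fragment per host under config.d,
# then concatenated into a single ssh config file. Host-block contents are
# assumed for illustration; the host names come from the log's loop items.
confdir=$(mktemp -d)
mkdir -p "$confdir/config.d"
for host in testbed-manager testbed-node-0 testbed-node-1; do
    printf 'Host %s\n    StrictHostKeyChecking yes\n\n' "$host" \
        > "$confdir/config.d/$host"
done
# Assemble the fragments, as the role's "Assemble ssh config" task does.
cat "$confdir"/config.d/* > "$confdir/config"
grep -c '^Host ' "$confdir/config"   # prints 3
```

Keeping per-host fragments in a drop-in directory means each host's entry can be rewritten idempotently on every run before the final file is reassembled.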
2025-10-08 15:15:16.148329 | orchestrator | 2025-10-08 15:15:16 | INFO  | It takes a moment until task c2bb97d6-1a45-45a6-a0b4-b8a50c00c278 (sshconfig) has been started and output is visible here.
2025-10-08 15:15:28.165700 | orchestrator |
2025-10-08 15:15:28.165818 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-10-08 15:15:28.165835 | orchestrator |
2025-10-08 15:15:28.165848 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-10-08 15:15:28.165859 | orchestrator | Wednesday 08 October 2025 15:15:20 +0000 (0:00:00.162) 0:00:00.162 *****
2025-10-08 15:15:28.165871 | orchestrator | ok: [testbed-manager]
2025-10-08 15:15:28.165882 | orchestrator |
2025-10-08 15:15:28.165893 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-10-08 15:15:28.165953 | orchestrator | Wednesday 08 October 2025 15:15:20 +0000 (0:00:00.555) 0:00:00.717 *****
2025-10-08 15:15:28.165965 | orchestrator | changed: [testbed-manager]
2025-10-08 15:15:28.165976 | orchestrator |
2025-10-08 15:15:28.165987 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-10-08 15:15:28.166079 | orchestrator | Wednesday 08 October 2025 15:15:21 +0000 (0:00:00.518) 0:00:01.236 *****
2025-10-08 15:15:28.166094 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-10-08 15:15:28.166104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-10-08 15:15:28.166115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-10-08 15:15:28.166126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-10-08 15:15:28.166136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-10-08 15:15:28.166147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-10-08 15:15:28.166157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-10-08 15:15:28.166168 | orchestrator |
2025-10-08 15:15:28.166179 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-10-08 15:15:28.166189 | orchestrator | Wednesday 08 October 2025 15:15:27 +0000 (0:00:05.825) 0:00:07.061 *****
2025-10-08 15:15:28.166200 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:15:28.166210 | orchestrator |
2025-10-08 15:15:28.166221 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-10-08 15:15:28.166231 | orchestrator | Wednesday 08 October 2025 15:15:27 +0000 (0:00:00.083) 0:00:07.144 *****
2025-10-08 15:15:28.166244 | orchestrator | changed: [testbed-manager]
2025-10-08 15:15:28.166257 | orchestrator |
2025-10-08 15:15:28.166269 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:15:28.166282 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:15:28.166295 | orchestrator |
2025-10-08 15:15:28.166307 | orchestrator |
2025-10-08 15:15:28.166319 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:15:28.166331 | orchestrator | Wednesday 08 October 2025 15:15:27 +0000 (0:00:00.552) 0:00:07.697 *****
2025-10-08 15:15:28.166343 | orchestrator | ===============================================================================
2025-10-08 15:15:28.166355 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.83s
2025-10-08 15:15:28.166368 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-10-08 15:15:28.166379 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s
2025-10-08 15:15:28.166391 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2025-10-08 15:15:28.166403 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-10-08 15:15:28.458537 | orchestrator | + osism apply known-hosts
2025-10-08 15:15:40.616570 | orchestrator | 2025-10-08 15:15:40 | INFO  | Task bdda2c52-0335-4ea3-85bc-214fcfefcb65 (known-hosts) was prepared for execution.
2025-10-08 15:15:40.616667 | orchestrator | 2025-10-08 15:15:40 | INFO  | It takes a moment until task bdda2c52-0335-4ea3-85bc-214fcfefcb65 (known-hosts) has been started and output is visible here.
2025-10-08 15:15:57.824675 | orchestrator |
2025-10-08 15:15:57.824790 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-10-08 15:15:57.824807 | orchestrator |
2025-10-08 15:15:57.824820 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-10-08 15:15:57.824832 | orchestrator | Wednesday 08 October 2025 15:15:44 +0000 (0:00:00.187) 0:00:00.187 *****
2025-10-08 15:15:57.824843 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-10-08 15:15:57.824855 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-10-08 15:15:57.824866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-10-08 15:15:57.824877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-10-08 15:15:57.824888 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-10-08 15:15:57.824898 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-10-08 15:15:57.824983 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-10-08 15:15:57.824997 | orchestrator |
2025-10-08 15:15:57.825018 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-10-08 15:15:57.825030 | orchestrator | Wednesday 08 October 2025 15:15:50 +0000 (0:00:05.998) 0:00:06.185 ***** 2025-10-08
15:15:57.825043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-10-08 15:15:57.825056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-10-08 15:15:57.825067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-10-08 15:15:57.825078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-10-08 15:15:57.825089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-10-08 15:15:57.825101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-10-08 15:15:57.825111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-10-08 15:15:57.825122 | orchestrator |
2025-10-08 15:15:57.825133 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825143 | orchestrator | Wednesday 08 October 2025 15:15:50 +0000 (0:00:00.173) 0:00:06.358 *****
2025-10-08 15:15:57.825157 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAb8v0GAtxVxnB96DhtFRd9dECrssLQlLwwjy0/UU3rYukQyNLWZ6l+XaGkisBrF4gbvQbOpL1sRP69N1h+9f47xBfv1utZQ7dhckrxi3Sga3+vInCje+edq5UL7PEm5KVmxqdSUvdCPPlWnpwVP/5K0dwmPejqQdT1DXS7ra17a4rm0FXMao9/ioZFkhNfkddj3ISoqXuz325tO5kZMUSMDkgJYG+PvbdxyvFrzoslKpHUD1I0ZdnMHRO1SWUCTuqEk/pK1i+G2Mm2M/mTZ6HvfDsJ+N5VACQeQLGh42mMZ7nGf4yUfK07gNRd3UCyaa+egovxIfv5+PKg+ggtg8TELRdPe4JlKsGDV3eN09uR3QlfoGYMQkF3q+uFqorq0+oq1Dvup7wB9nGEVBcaoaHpuQ2WqS8qieqgFaRLiT5YfU5cxli03sYTWItvLukqIC5nJnBKrJgbIBkYAG5lCabBlL4Hr6WjNL4otoEMCI9GkJP+XUIm8R5Zo/++NTTEA8=)
2025-10-08 15:15:57.825172 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpuhXnnrGs99qbxkI+wnC8jwbcQerH1pXzN1fdH2Kxcrn/XRW/szOS+Hav0/6I4zkcGAwAFpoPuM/LHvrhIPqc=)
2025-10-08 15:15:57.825185 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO7Wlywkp/0JT/rpAFoHBdowHML90vg+xMqrIOWIzltl)
2025-10-08 15:15:57.825198 | orchestrator |
2025-10-08 15:15:57.825209 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825219 | orchestrator | Wednesday 08 October 2025 15:15:52 +0000 (0:00:01.305) 0:00:07.664 *****
2025-10-08 15:15:57.825230 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBwkbovjEAU2yGgDLUpwaFAerdwEOBwprFtsZuItq27W+cNKomrhDBir37CGQRo5+D78HwUNBo3/2A4XZ9kL0M=)
2025-10-08 15:15:57.825270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM92JXvmQ8fCmuhOTqPrqaSXlbni5v8kZtq7zkJ0/Yf3xQOTN0o8mxOhs4q0wKFCUFDloVTOPg+WMg5wlJDm0WvJSqwVwvJfow6nSf4cWILj9IAWNOxxNiofkOQ8NR4raw+GN6rLufJrG+3i28m9t1qM07jj0XR0aMgam2Z90f7pSxIHJ5l3CIej9L0RxtUz3Vtt4efTmtqBVDHJMreN0dm8Qmn3mrC+CEcordO3c16aonP3tyJiYZiEEk7RJ1Gt4VhoARt3l/B/5rXl8fC2ne8+jWpdOxrYf7Co43ow7TFtHnUj5P1CAoKA2TNvHT4iZm0cqBMySIvgwuNZhsvGt3R7DbSdIxhE8bYudKE+ClGpN7W3lAnfe14W3nzfJYQXbl7JxNDGTYDW+BrzAufqhYGlZZxmnQG+sQ8UaCna/Q0vhJshODsAuKDLQerdE3fNZmCBWBJDUSR6FdKssXwvVdm8MeFeHv8WH1eXldDjrlJUV6lxU2lh0O9REReNXRwoM=)
2025-10-08 15:15:57.825291 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK4pgBSsjM+0lGD8AShrEzdTCvdGgIfJ30jz7ZU37Iwg)
2025-10-08 15:15:57.825302 | orchestrator |
2025-10-08 15:15:57.825313 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825324 | orchestrator | Wednesday 08 October 2025 15:15:53 +0000 (0:00:01.070) 0:00:08.735 *****
2025-10-08 15:15:57.825405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKsAq8/Pb//zvWiZorOshjBaewLNSAM+3gTaeqAnSEPDJd74G2yUkQT4UEV/hzBJIPJMl37Smn9Ag5F5cyqWwk2h3W6ZWTN65d8we5OkiHaJVpZD4vFEoIagn5bK0ugs5teIX4+2kianqKTTnnkGJ+jr0g9pfMAMMO/S+oFEQrMqk7y63Xg+L/AQnoq+bvUO6vrt4JO8IRdg4hqKBJwJFUI3rNnwB5RJxrAnb78bCTEMykdqLNg8rhvVazwU5JDHiQ0lULLOks4KmlBDj6YZMiGEi4LspG6RAV8Ps6YQ689oBsdhpgumRpkpMilkkbJta6RT0ctZqn6LBwO6OgCGI5VYy+oscMNmW620XvZXuuwTMPgLAZpEVdmPctfiLd2Z/NNBoXs0gc7nE/87x+g2wMq9qNMr6ZJYBXfl8X5e/Mt1aZ26ywUVVJ5x7gwlzyq7Mv/X/KQvP+h4WH/fbsInu2/Lw/KgNKtWaUzMqtXc3FbSawC4ObYdlOgDQRc/f1vW8=)
2025-10-08 15:15:57.825418 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn71FZj+OaQvaF1TuHnBLouofre4eiaJZv2AiBsEvc4aHvfzEqZxqQluSk+7xDXpse6DQgAwPVlQUCUdkaJr9I=)
2025-10-08 15:15:57.825429 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAycBgE7EFJNviiz0VyY8545uVWUpAGt2ryG4M1FwH21)
2025-10-08 15:15:57.825440 | orchestrator |
2025-10-08 15:15:57.825451 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825462 | orchestrator | Wednesday 08 October 2025 15:15:54 +0000 (0:00:01.078) 0:00:09.813 *****
2025-10-08 15:15:57.825472 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILGNgQGyLXy8VBx/VAAVzyWEuzvfEt1CeMrmHBUzGlLh)
2025-10-08 15:15:57.825484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC23o84ySNw2Bz/PhZ4egSO4EofM/JjHpMZ5ZmMf/LFWO13bBdvesE4Wk2EuNDT+2lLpyev4NDoIN7+xcFpagpdNVtioy20bFxwbx82J0C/u6Tw10isUOX8nZjxG+dtG4Ob3c8Pwq5pNWjdmrtYk0PHmGGuzAsrp6G//owhOWWUaO8fNxpIF3cdJbnujLP8tEJxYjZ0/cRH3bHRpdZs35g2ne9AhluACvSHenGkQaoAy8DXMB+0ENxoK5Tbn/9EtQaLRqALtx+7ntnxNXDPjWuZs3AUCtAFaPa3btNI3WCUI5bTPsSDltiiTqXmfe560AjBHw7yv8vvnklgOqa4mnteXxYufweFP3FU6EF7MEiY33qdyX9WGm4/YLQfCAGkE61PeOWDzUQsxpK24IunFCh8OuMQGLZMcyYYRwhfAta/4JTSHJ2zVQ5r0+ZFXZmsEMqWa7FtlGZhAEbSxdIl7t+cWzHdyHmRRqZxgg1mVs96ylofStkkQzGv9SXR5TcHon0=)
2025-10-08 15:15:57.825495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEE51Ktksw8cjMjd18/I6N+rYUqMet8CdxAfBKIZhmOJVevp4/er3Rs9F7xaU+fOI8ro/pxSmu5nlBAAYvszyZI=)
2025-10-08 15:15:57.825506 | orchestrator |
2025-10-08 15:15:57.825517 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825528 | orchestrator | Wednesday 08 October 2025 15:15:55 +0000 (0:00:01.100) 0:00:10.913 *****
2025-10-08 15:15:57.825539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0emiHGvX1sasAQF+apNUKn2GTb4mPz/lTf5s7/0i4/ncERam9c9ZM+fTnFp85ivAXHcKMfQO3glVy4x//ZXzcCXodaEiNMQgY230WovT+6XvqAt/+Q3pJyn949A6/AINqOSQs2eE2V3Hdq1+KImdM43JmMREkbbcW3Y5SyFmv5SjBcKlg3y3stQ9Ee4KWHSu0PUra9K1liwoyNZ+jjm1NE24PrpOgel1eHZZ13wWdFyhsPnjIMBEiRtWahvHlkvKC/E/jkYzE388jKzgO2MUbbkce8fMo86BI0aHruFxot6yoGLFaGrq/6HYoDTL9s3M83xLiPDOCXUPrx90+w07TeabZqf5JGgGbtmnRvGQ7peENbznyjwy727wXfph57wmrTXSf5f9DPBcxNBxyto+Y/gADTzT6r89bptIp2aWM/2JBZ32oti/FQ3LSfbAx2z4wxXG7hYM0YqJIXXMIyDoxUqibNW0bi2l1mA3pOCrUJxKQfiPngnyARkiJ5hPj4hk=)
2025-10-08 15:15:57.825550 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCe4+odVo2cKU1WmfK1E1ffN7jEaPoNHr2ngk1Rj4Iu4zrImvaX2IAgLvp9eHfB86WCBsii+UMLmZx7zlbr7xk8=)
2025-10-08 15:15:57.825569 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIId0Yqi0HAzUM64cnMg++tEB2PcSerqSoHG6QIU6X8cJ)
2025-10-08 15:15:57.825579 | orchestrator |
2025-10-08 15:15:57.825590 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:15:57.825601 | orchestrator | Wednesday 08 October 2025 15:15:56 +0000 (0:00:01.160) 0:00:12.074 *****
2025-10-08 15:15:57.825621 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChhcojRAVevnnNreXCDMuxKKoLQnEKOpHHIa7tVqnEM8qQ/ioSRWYFaSOdeON7gPi31q+aLQ3qun5wUZ62bNjrt/RzANyR9BGHYJkznRbQisdHxiWnat/x2+ZTSsmNnhrVVuwAiRoL8g4qA5F0dsIAgSfheaxEZWDwssavTlx27TFQS/T3Z+zAPcJODzT+XxBH4U67pHVcU+c/8FIJL5jna71Ai4rTm2w//wp9tmUrmUFRhccZZR1bAKUywP6EKliW/YT9aTQTt0Z3uup/3dBUHUicafANoMq4Vx7EeqZofbnMubCU4vVhkXC/nd2gPX6Xe0ljEfqAHmdDlhmovlUIHdZDN8CT0+vtG2Euk+vT86hP8FLEZ5/VQbGp43FPqZWSlZgR+Un9CaJseKjiJ9I4wuh/19FjkFwhfishW+QTgXgA4KGAwv+UOpXg4u8IjUZTLu2rEwCUaNBUOXGx2p+1kZUtWnH4ZK7QThMqi0YadTKpHBiX3+hwR74sTClCqSc=)
2025-10-08 15:16:09.170219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFAPsdsdBzMxOyUh2WByMejoStqpYKf9gNiNbdlCODzl)
2025-10-08 15:16:09.170310 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXF1repC7zCLl0LmsJvaScYKEBY1RP/9h0zM7QZlSPvPrhnMyDH8DoG4S+C/2eD/jRYxzdbtAEXbEVur4N54y4=)
2025-10-08 15:16:09.170321 | orchestrator |
2025-10-08 15:16:09.170329 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-08 15:16:09.170337 | orchestrator | Wednesday 08 October 2025 15:15:57 +0000 (0:00:01.121) 0:00:13.195 *****
2025-10-08 15:16:09.170345 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU02m0gvtXXVfdnuHA0FQmUoWw2hLPnUfaivog2A+IhZKoFH2van00F67Ckm+FlZKiR9th3UDPXcQIUvwaJMPneCMOutvvGt2MCnElMmhV669hV618Cxr3jSrTCtaXoYeBJlcdtXcUzuYCy/xT3cQvEz7RPkiriKh7QKnWPPGlOJiGofH3MXiywRPK+nBbiAWjFJ8dRuE2SXxN8XxmjwpOvS1Y2o5wTj5xyg0g3YkE74giWYAMYbDz+nRYCM8K9mscmddanslqEyuI9e1EU3WDZ1Vsys2ZJz4TrLh+ebwsGi9dXyHRVhMKi3Au5lyo9gd1jtFj61XOc8L5NaBgpIij5bjswlqa0urqRHDXv2ANUyUA8gAhFAG8CLls8RFJfhNC9dB5dE9QQrnKXvJOM9/patuElNZAJYVsppZqAsN9CTDER+e1O0/3w2vF+EwvRDiZDrPJxebDHet2Ad5LyOgXeTCDHjGGPYjQt2tIbasMypozdTLMoMhLiOxNyR7XHGc=)
2025-10-08 15:16:09.170353 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBd6rVM6doZP+RSzZZd93fZmPeyXnenIFsxWb+eo5N27QdyR2PBnyK+Wd4fueojAx18n0/pJpLvApvnqRmsOBco=)
2025-10-08 15:16:09.170359 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVaJDDRwUtI47me50Gg4DmYnGJensir07PUnLVk2POU)
2025-10-08 15:16:09.170366 | orchestrator |
2025-10-08 15:16:09.170386 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-10-08 15:16:09.170394 | orchestrator | Wednesday 08 October 2025 15:15:58 +0000 (0:00:01.128) 0:00:14.324 *****
2025-10-08 15:16:09.170401 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-10-08 15:16:09.170408 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-10-08 15:16:09.170414 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-10-08 15:16:09.170420 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-10-08 15:16:09.170426 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-10-08 15:16:09.170433 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-10-08 15:16:09.170444 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-10-08 15:16:09.170450 | orchestrator |
2025-10-08 15:16:09.170457 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-10-08 15:16:09.170463 | orchestrator | Wednesday 08 October 2025 15:16:04 +0000 (0:00:05.454) 0:00:19.778 *****
2025-10-08 15:16:09.170490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-10-08 15:16:09.170498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-10-08 15:16:09.170504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-10-08 15:16:09.170510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-10-08 15:16:09.170516 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-10-08 15:16:09.170523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-10-08 15:16:09.170529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-10-08 15:16:09.170535 | orchestrator | 2025-10-08 15:16:09.170541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:09.170547 | orchestrator | Wednesday 08 October 2025 15:16:04 +0000 (0:00:00.180) 0:00:19.959 ***** 2025-10-08 15:16:09.170553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO7Wlywkp/0JT/rpAFoHBdowHML90vg+xMqrIOWIzltl) 2025-10-08 15:16:09.170579 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAb8v0GAtxVxnB96DhtFRd9dECrssLQlLwwjy0/UU3rYukQyNLWZ6l+XaGkisBrF4gbvQbOpL1sRP69N1h+9f47xBfv1utZQ7dhckrxi3Sga3+vInCje+edq5UL7PEm5KVmxqdSUvdCPPlWnpwVP/5K0dwmPejqQdT1DXS7ra17a4rm0FXMao9/ioZFkhNfkddj3ISoqXuz325tO5kZMUSMDkgJYG+PvbdxyvFrzoslKpHUD1I0ZdnMHRO1SWUCTuqEk/pK1i+G2Mm2M/mTZ6HvfDsJ+N5VACQeQLGh42mMZ7nGf4yUfK07gNRd3UCyaa+egovxIfv5+PKg+ggtg8TELRdPe4JlKsGDV3eN09uR3QlfoGYMQkF3q+uFqorq0+oq1Dvup7wB9nGEVBcaoaHpuQ2WqS8qieqgFaRLiT5YfU5cxli03sYTWItvLukqIC5nJnBKrJgbIBkYAG5lCabBlL4Hr6WjNL4otoEMCI9GkJP+XUIm8R5Zo/++NTTEA8=) 2025-10-08 15:16:09.170586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpuhXnnrGs99qbxkI+wnC8jwbcQerH1pXzN1fdH2Kxcrn/XRW/szOS+Hav0/6I4zkcGAwAFpoPuM/LHvrhIPqc=) 2025-10-08 
15:16:09.170593 | orchestrator | 2025-10-08 15:16:09.170599 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:09.170605 | orchestrator | Wednesday 08 October 2025 15:16:05 +0000 (0:00:01.225) 0:00:21.184 ***** 2025-10-08 15:16:09.170611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK4pgBSsjM+0lGD8AShrEzdTCvdGgIfJ30jz7ZU37Iwg) 2025-10-08 15:16:09.170618 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM92JXvmQ8fCmuhOTqPrqaSXlbni5v8kZtq7zkJ0/Yf3xQOTN0o8mxOhs4q0wKFCUFDloVTOPg+WMg5wlJDm0WvJSqwVwvJfow6nSf4cWILj9IAWNOxxNiofkOQ8NR4raw+GN6rLufJrG+3i28m9t1qM07jj0XR0aMgam2Z90f7pSxIHJ5l3CIej9L0RxtUz3Vtt4efTmtqBVDHJMreN0dm8Qmn3mrC+CEcordO3c16aonP3tyJiYZiEEk7RJ1Gt4VhoARt3l/B/5rXl8fC2ne8+jWpdOxrYf7Co43ow7TFtHnUj5P1CAoKA2TNvHT4iZm0cqBMySIvgwuNZhsvGt3R7DbSdIxhE8bYudKE+ClGpN7W3lAnfe14W3nzfJYQXbl7JxNDGTYDW+BrzAufqhYGlZZxmnQG+sQ8UaCna/Q0vhJshODsAuKDLQerdE3fNZmCBWBJDUSR6FdKssXwvVdm8MeFeHv8WH1eXldDjrlJUV6lxU2lh0O9REReNXRwoM=) 2025-10-08 15:16:09.170624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBwkbovjEAU2yGgDLUpwaFAerdwEOBwprFtsZuItq27W+cNKomrhDBir37CGQRo5+D78HwUNBo3/2A4XZ9kL0M=) 2025-10-08 15:16:09.170637 | orchestrator | 2025-10-08 15:16:09.170643 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:09.170649 | orchestrator | Wednesday 08 October 2025 15:16:06 +0000 (0:00:01.101) 0:00:22.285 ***** 2025-10-08 15:16:09.170656 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn71FZj+OaQvaF1TuHnBLouofre4eiaJZv2AiBsEvc4aHvfzEqZxqQluSk+7xDXpse6DQgAwPVlQUCUdkaJr9I=) 2025-10-08 15:16:09.170662 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAycBgE7EFJNviiz0VyY8545uVWUpAGt2ryG4M1FwH21) 2025-10-08 15:16:09.170669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKsAq8/Pb//zvWiZorOshjBaewLNSAM+3gTaeqAnSEPDJd74G2yUkQT4UEV/hzBJIPJMl37Smn9Ag5F5cyqWwk2h3W6ZWTN65d8we5OkiHaJVpZD4vFEoIagn5bK0ugs5teIX4+2kianqKTTnnkGJ+jr0g9pfMAMMO/S+oFEQrMqk7y63Xg+L/AQnoq+bvUO6vrt4JO8IRdg4hqKBJwJFUI3rNnwB5RJxrAnb78bCTEMykdqLNg8rhvVazwU5JDHiQ0lULLOks4KmlBDj6YZMiGEi4LspG6RAV8Ps6YQ689oBsdhpgumRpkpMilkkbJta6RT0ctZqn6LBwO6OgCGI5VYy+oscMNmW620XvZXuuwTMPgLAZpEVdmPctfiLd2Z/NNBoXs0gc7nE/87x+g2wMq9qNMr6ZJYBXfl8X5e/Mt1aZ26ywUVVJ5x7gwlzyq7Mv/X/KQvP+h4WH/fbsInu2/Lw/KgNKtWaUzMqtXc3FbSawC4ObYdlOgDQRc/f1vW8=) 2025-10-08 15:16:09.170675 | orchestrator | 2025-10-08 15:16:09.170681 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:09.170688 | orchestrator | Wednesday 08 October 2025 15:16:08 +0000 (0:00:01.149) 0:00:23.435 ***** 2025-10-08 15:16:09.170698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEE51Ktksw8cjMjd18/I6N+rYUqMet8CdxAfBKIZhmOJVevp4/er3Rs9F7xaU+fOI8ro/pxSmu5nlBAAYvszyZI=) 2025-10-08 15:16:09.170705 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC23o84ySNw2Bz/PhZ4egSO4EofM/JjHpMZ5ZmMf/LFWO13bBdvesE4Wk2EuNDT+2lLpyev4NDoIN7+xcFpagpdNVtioy20bFxwbx82J0C/u6Tw10isUOX8nZjxG+dtG4Ob3c8Pwq5pNWjdmrtYk0PHmGGuzAsrp6G//owhOWWUaO8fNxpIF3cdJbnujLP8tEJxYjZ0/cRH3bHRpdZs35g2ne9AhluACvSHenGkQaoAy8DXMB+0ENxoK5Tbn/9EtQaLRqALtx+7ntnxNXDPjWuZs3AUCtAFaPa3btNI3WCUI5bTPsSDltiiTqXmfe560AjBHw7yv8vvnklgOqa4mnteXxYufweFP3FU6EF7MEiY33qdyX9WGm4/YLQfCAGkE61PeOWDzUQsxpK24IunFCh8OuMQGLZMcyYYRwhfAta/4JTSHJ2zVQ5r0+ZFXZmsEMqWa7FtlGZhAEbSxdIl7t+cWzHdyHmRRqZxgg1mVs96ylofStkkQzGv9SXR5TcHon0=) 2025-10-08 15:16:09.170718 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILGNgQGyLXy8VBx/VAAVzyWEuzvfEt1CeMrmHBUzGlLh) 2025-10-08 15:16:14.859384 | orchestrator | 2025-10-08 15:16:14.859499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:14.859517 | orchestrator | Wednesday 08 October 2025 15:16:09 +0000 (0:00:01.108) 0:00:24.543 ***** 2025-10-08 15:16:14.859529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIId0Yqi0HAzUM64cnMg++tEB2PcSerqSoHG6QIU6X8cJ) 2025-10-08 15:16:14.859545 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0emiHGvX1sasAQF+apNUKn2GTb4mPz/lTf5s7/0i4/ncERam9c9ZM+fTnFp85ivAXHcKMfQO3glVy4x//ZXzcCXodaEiNMQgY230WovT+6XvqAt/+Q3pJyn949A6/AINqOSQs2eE2V3Hdq1+KImdM43JmMREkbbcW3Y5SyFmv5SjBcKlg3y3stQ9Ee4KWHSu0PUra9K1liwoyNZ+jjm1NE24PrpOgel1eHZZ13wWdFyhsPnjIMBEiRtWahvHlkvKC/E/jkYzE388jKzgO2MUbbkce8fMo86BI0aHruFxot6yoGLFaGrq/6HYoDTL9s3M83xLiPDOCXUPrx90+w07TeabZqf5JGgGbtmnRvGQ7peENbznyjwy727wXfph57wmrTXSf5f9DPBcxNBxyto+Y/gADTzT6r89bptIp2aWM/2JBZ32oti/FQ3LSfbAx2z4wxXG7hYM0YqJIXXMIyDoxUqibNW0bi2l1mA3pOCrUJxKQfiPngnyARkiJ5hPj4hk=) 2025-10-08 15:16:14.859561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCe4+odVo2cKU1WmfK1E1ffN7jEaPoNHr2ngk1Rj4Iu4zrImvaX2IAgLvp9eHfB86WCBsii+UMLmZx7zlbr7xk8=) 2025-10-08 15:16:14.859575 | orchestrator | 2025-10-08 15:16:14.859587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:14.859622 | orchestrator | Wednesday 08 October 2025 15:16:10 +0000 (0:00:01.123) 0:00:25.666 ***** 2025-10-08 15:16:14.859634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChhcojRAVevnnNreXCDMuxKKoLQnEKOpHHIa7tVqnEM8qQ/ioSRWYFaSOdeON7gPi31q+aLQ3qun5wUZ62bNjrt/RzANyR9BGHYJkznRbQisdHxiWnat/x2+ZTSsmNnhrVVuwAiRoL8g4qA5F0dsIAgSfheaxEZWDwssavTlx27TFQS/T3Z+zAPcJODzT+XxBH4U67pHVcU+c/8FIJL5jna71Ai4rTm2w//wp9tmUrmUFRhccZZR1bAKUywP6EKliW/YT9aTQTt0Z3uup/3dBUHUicafANoMq4Vx7EeqZofbnMubCU4vVhkXC/nd2gPX6Xe0ljEfqAHmdDlhmovlUIHdZDN8CT0+vtG2Euk+vT86hP8FLEZ5/VQbGp43FPqZWSlZgR+Un9CaJseKjiJ9I4wuh/19FjkFwhfishW+QTgXgA4KGAwv+UOpXg4u8IjUZTLu2rEwCUaNBUOXGx2p+1kZUtWnH4ZK7QThMqi0YadTKpHBiX3+hwR74sTClCqSc=) 2025-10-08 15:16:14.859646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXF1repC7zCLl0LmsJvaScYKEBY1RP/9h0zM7QZlSPvPrhnMyDH8DoG4S+C/2eD/jRYxzdbtAEXbEVur4N54y4=) 2025-10-08 15:16:14.859658 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFAPsdsdBzMxOyUh2WByMejoStqpYKf9gNiNbdlCODzl) 2025-10-08 15:16:14.859669 | orchestrator | 2025-10-08 15:16:14.859680 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-08 15:16:14.859690 | orchestrator | Wednesday 08 October 2025 15:16:11 +0000 (0:00:01.101) 0:00:26.768 ***** 2025-10-08 15:16:14.859701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU02m0gvtXXVfdnuHA0FQmUoWw2hLPnUfaivog2A+IhZKoFH2van00F67Ckm+FlZKiR9th3UDPXcQIUvwaJMPneCMOutvvGt2MCnElMmhV669hV618Cxr3jSrTCtaXoYeBJlcdtXcUzuYCy/xT3cQvEz7RPkiriKh7QKnWPPGlOJiGofH3MXiywRPK+nBbiAWjFJ8dRuE2SXxN8XxmjwpOvS1Y2o5wTj5xyg0g3YkE74giWYAMYbDz+nRYCM8K9mscmddanslqEyuI9e1EU3WDZ1Vsys2ZJz4TrLh+ebwsGi9dXyHRVhMKi3Au5lyo9gd1jtFj61XOc8L5NaBgpIij5bjswlqa0urqRHDXv2ANUyUA8gAhFAG8CLls8RFJfhNC9dB5dE9QQrnKXvJOM9/patuElNZAJYVsppZqAsN9CTDER+e1O0/3w2vF+EwvRDiZDrPJxebDHet2Ad5LyOgXeTCDHjGGPYjQt2tIbasMypozdTLMoMhLiOxNyR7XHGc=) 2025-10-08 15:16:14.859712 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBd6rVM6doZP+RSzZZd93fZmPeyXnenIFsxWb+eo5N27QdyR2PBnyK+Wd4fueojAx18n0/pJpLvApvnqRmsOBco=) 2025-10-08 15:16:14.859724 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVaJDDRwUtI47me50Gg4DmYnGJensir07PUnLVk2POU) 2025-10-08 15:16:14.859734 | orchestrator | 2025-10-08 15:16:14.859745 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-10-08 15:16:14.859756 | orchestrator | Wednesday 08 October 2025 15:16:13 +0000 (0:00:02.100) 0:00:28.869 ***** 2025-10-08 15:16:14.859767 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-10-08 15:16:14.859778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-08 15:16:14.859788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-10-08 15:16:14.859799 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-10-08 15:16:14.859810 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-10-08 15:16:14.859820 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-10-08 15:16:14.859831 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-10-08 15:16:14.859842 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:16:14.859853 | orchestrator | 2025-10-08 15:16:14.859882 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-10-08 15:16:14.859893 | orchestrator | Wednesday 08 October 2025 15:16:13 +0000 (0:00:00.166) 0:00:29.036 ***** 2025-10-08 15:16:14.859904 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:16:14.859973 | orchestrator | 2025-10-08 15:16:14.859986 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-10-08 15:16:14.859998 | orchestrator | Wednesday 08 October 2025 
15:16:13 +0000 (0:00:00.058) 0:00:29.095 ***** 2025-10-08 15:16:14.860020 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:16:14.860032 | orchestrator | 2025-10-08 15:16:14.860044 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-10-08 15:16:14.860056 | orchestrator | Wednesday 08 October 2025 15:16:13 +0000 (0:00:00.066) 0:00:29.162 ***** 2025-10-08 15:16:14.860067 | orchestrator | changed: [testbed-manager] 2025-10-08 15:16:14.860078 | orchestrator | 2025-10-08 15:16:14.860090 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:16:14.860102 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:16:14.860115 | orchestrator | 2025-10-08 15:16:14.860126 | orchestrator | 2025-10-08 15:16:14.860138 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:16:14.860150 | orchestrator | Wednesday 08 October 2025 15:16:14 +0000 (0:00:00.850) 0:00:30.012 ***** 2025-10-08 15:16:14.860162 | orchestrator | =============================================================================== 2025-10-08 15:16:14.860174 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2025-10-08 15:16:14.860185 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2025-10-08 15:16:14.860198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.10s 2025-10-08 15:16:14.860209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.31s 2025-10-08 15:16:14.860221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-10-08 15:16:14.860251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-10-08 
15:16:14.860264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-10-08 15:16:14.860276 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-10-08 15:16:14.860287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-10-08 15:16:14.860298 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-10-08 15:16:14.860308 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-08 15:16:14.860319 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-10-08 15:16:14.860334 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-10-08 15:16:14.860346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-10-08 15:16:14.860356 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-10-08 15:16:14.860367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-10-08 15:16:14.860378 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.85s 2025-10-08 15:16:14.860388 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-10-08 15:16:14.860399 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-10-08 15:16:14.860410 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-10-08 15:16:15.201875 | orchestrator | + osism apply squid 2025-10-08 15:16:27.438537 | orchestrator | 2025-10-08 15:16:27 | INFO  | Task 5657696b-c71e-4f43-9f6b-73631b979529 (squid) was prepared for execution. 
2025-10-08 15:16:27.438651 | orchestrator | 2025-10-08 15:16:27 | INFO  | It takes a moment until task 5657696b-c71e-4f43-9f6b-73631b979529 (squid) has been started and output is visible here. 2025-10-08 15:18:21.775490 | orchestrator | 2025-10-08 15:18:21.775611 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-10-08 15:18:21.775628 | orchestrator | 2025-10-08 15:18:21.775640 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-10-08 15:18:21.775652 | orchestrator | Wednesday 08 October 2025 15:16:31 +0000 (0:00:00.166) 0:00:00.166 ***** 2025-10-08 15:18:21.775690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-10-08 15:18:21.775703 | orchestrator | 2025-10-08 15:18:21.775715 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-10-08 15:18:21.775726 | orchestrator | Wednesday 08 October 2025 15:16:31 +0000 (0:00:00.095) 0:00:00.261 ***** 2025-10-08 15:18:21.775737 | orchestrator | ok: [testbed-manager] 2025-10-08 15:18:21.775749 | orchestrator | 2025-10-08 15:18:21.775760 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-10-08 15:18:21.775770 | orchestrator | Wednesday 08 October 2025 15:16:33 +0000 (0:00:01.538) 0:00:01.799 ***** 2025-10-08 15:18:21.775781 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-10-08 15:18:21.775792 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-10-08 15:18:21.775803 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-10-08 15:18:21.775814 | orchestrator | 2025-10-08 15:18:21.775825 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-10-08 15:18:21.775835 | orchestrator | Wednesday 
08 October 2025 15:16:34 +0000 (0:00:01.168) 0:00:02.968 ***** 2025-10-08 15:18:21.775846 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-10-08 15:18:21.775857 | orchestrator | 2025-10-08 15:18:21.775868 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-10-08 15:18:21.775879 | orchestrator | Wednesday 08 October 2025 15:16:35 +0000 (0:00:01.107) 0:00:04.075 ***** 2025-10-08 15:18:21.775889 | orchestrator | ok: [testbed-manager] 2025-10-08 15:18:21.775900 | orchestrator | 2025-10-08 15:18:21.775911 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-10-08 15:18:21.775921 | orchestrator | Wednesday 08 October 2025 15:16:35 +0000 (0:00:00.393) 0:00:04.469 ***** 2025-10-08 15:18:21.775982 | orchestrator | changed: [testbed-manager] 2025-10-08 15:18:21.775995 | orchestrator | 2025-10-08 15:18:21.776005 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-10-08 15:18:21.776016 | orchestrator | Wednesday 08 October 2025 15:16:36 +0000 (0:00:00.978) 0:00:05.447 ***** 2025-10-08 15:18:21.776027 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-10-08 15:18:21.776040 | orchestrator | ok: [testbed-manager] 2025-10-08 15:18:21.776053 | orchestrator | 2025-10-08 15:18:21.776065 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-10-08 15:18:21.776078 | orchestrator | Wednesday 08 October 2025 15:17:08 +0000 (0:00:31.751) 0:00:37.199 ***** 2025-10-08 15:18:21.776090 | orchestrator | changed: [testbed-manager] 2025-10-08 15:18:21.776102 | orchestrator | 2025-10-08 15:18:21.776114 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-10-08 15:18:21.776126 | orchestrator | Wednesday 08 October 2025 15:17:20 +0000 (0:00:12.086) 0:00:49.286 ***** 2025-10-08 15:18:21.776138 | orchestrator | Pausing for 60 seconds 2025-10-08 15:18:21.776151 | orchestrator | changed: [testbed-manager] 2025-10-08 15:18:21.776163 | orchestrator | 2025-10-08 15:18:21.776175 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-10-08 15:18:21.776187 | orchestrator | Wednesday 08 October 2025 15:18:20 +0000 (0:01:00.080) 0:01:49.366 ***** 2025-10-08 15:18:21.776199 | orchestrator | ok: [testbed-manager] 2025-10-08 15:18:21.776211 | orchestrator | 2025-10-08 15:18:21.776223 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-10-08 15:18:21.776236 | orchestrator | Wednesday 08 October 2025 15:18:20 +0000 (0:00:00.074) 0:01:49.440 ***** 2025-10-08 15:18:21.776248 | orchestrator | changed: [testbed-manager] 2025-10-08 15:18:21.776260 | orchestrator | 2025-10-08 15:18:21.776273 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:18:21.776285 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:18:21.776305 | orchestrator | 2025-10-08 15:18:21.776318 | orchestrator | 2025-10-08 15:18:21.776330 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-10-08 15:18:21.776342 | orchestrator | Wednesday 08 October 2025 15:18:21 +0000 (0:00:00.659) 0:01:50.100 ***** 2025-10-08 15:18:21.776354 | orchestrator | =============================================================================== 2025-10-08 15:18:21.776367 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-10-08 15:18:21.776380 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.75s 2025-10-08 15:18:21.776391 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.09s 2025-10-08 15:18:21.776401 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.54s 2025-10-08 15:18:21.776412 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2025-10-08 15:18:21.776423 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2025-10-08 15:18:21.776433 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s 2025-10-08 15:18:21.776444 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-10-08 15:18:21.776455 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-10-08 15:18:21.776465 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-10-08 15:18:21.776476 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-10-08 15:18:22.096488 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-08 15:18:22.096580 | orchestrator | ++ semver latest 9.0.0 2025-10-08 15:18:22.149859 | orchestrator | + [[ -1 -lt 0 ]] 2025-10-08 15:18:22.149985 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-08 15:18:22.151190 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-10-08 15:18:34.182254 | orchestrator | 2025-10-08 15:18:34 | INFO  | Task 6edb509d-6a59-4fc6-9789-ab10c02f9c98 (operator) was prepared for execution. 2025-10-08 15:18:34.182375 | orchestrator | 2025-10-08 15:18:34 | INFO  | It takes a moment until task 6edb509d-6a59-4fc6-9789-ab10c02f9c98 (operator) has been started and output is visible here. 2025-10-08 15:18:49.995447 | orchestrator | 2025-10-08 15:18:49.995559 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-10-08 15:18:49.995576 | orchestrator | 2025-10-08 15:18:49.995589 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-08 15:18:49.995601 | orchestrator | Wednesday 08 October 2025 15:18:38 +0000 (0:00:00.143) 0:00:00.143 ***** 2025-10-08 15:18:49.995612 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:18:49.995624 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:18:49.995635 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:18:49.995646 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:18:49.995661 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:18:49.995679 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:18:49.995696 | orchestrator | 2025-10-08 15:18:49.995714 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-10-08 15:18:49.995732 | orchestrator | Wednesday 08 October 2025 15:18:41 +0000 (0:00:03.250) 0:00:03.394 ***** 2025-10-08 15:18:49.995748 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:18:49.995765 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:18:49.995784 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:18:49.995804 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:18:49.995822 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:18:49.995841 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:18:49.995857 | orchestrator | 2025-10-08 
15:18:49.995869 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-10-08 15:18:49.995880 | orchestrator |
2025-10-08 15:18:49.995891 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-10-08 15:18:49.995902 | orchestrator | Wednesday 08 October 2025 15:18:42 +0000 (0:00:00.810) 0:00:04.204 *****
2025-10-08 15:18:49.995913 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:18:49.995991 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:18:49.996006 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:18:49.996036 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:18:49.996048 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:18:49.996060 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:18:49.996072 | orchestrator |
2025-10-08 15:18:49.996084 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-10-08 15:18:49.996097 | orchestrator | Wednesday 08 October 2025 15:18:42 +0000 (0:00:00.196) 0:00:04.401 *****
2025-10-08 15:18:49.996108 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:18:49.996120 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:18:49.996132 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:18:49.996144 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:18:49.996156 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:18:49.996172 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:18:49.996192 | orchestrator |
2025-10-08 15:18:49.996210 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-10-08 15:18:49.996229 | orchestrator | Wednesday 08 October 2025 15:18:42 +0000 (0:00:00.172) 0:00:04.573 *****
2025-10-08 15:18:49.996249 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:49.996269 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:49.996291 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:49.996310 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:49.996327 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:49.996338 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:49.996349 | orchestrator |
2025-10-08 15:18:49.996360 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-10-08 15:18:49.996370 | orchestrator | Wednesday 08 October 2025 15:18:43 +0000 (0:00:00.627) 0:00:05.200 *****
2025-10-08 15:18:49.996381 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:49.996391 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:49.996402 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:49.996413 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:49.996423 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:49.996434 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:49.996444 | orchestrator |
2025-10-08 15:18:49.996455 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-10-08 15:18:49.996465 | orchestrator | Wednesday 08 October 2025 15:18:44 +0000 (0:00:00.909) 0:00:06.110 *****
2025-10-08 15:18:49.996476 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-10-08 15:18:49.996487 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-10-08 15:18:49.996504 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-10-08 15:18:49.996515 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-10-08 15:18:49.996526 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-10-08 15:18:49.996536 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-10-08 15:18:49.996547 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-10-08 15:18:49.996557 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-10-08 15:18:49.996568 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-10-08 15:18:49.996578 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-10-08 15:18:49.996589 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-10-08 15:18:49.996599 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-10-08 15:18:49.996610 | orchestrator |
2025-10-08 15:18:49.996621 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-10-08 15:18:49.996631 | orchestrator | Wednesday 08 October 2025 15:18:45 +0000 (0:00:01.168) 0:00:07.278 *****
2025-10-08 15:18:49.996642 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:49.996652 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:49.996663 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:49.996673 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:49.996684 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:49.996694 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:49.996714 | orchestrator |
2025-10-08 15:18:49.996725 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-10-08 15:18:49.996737 | orchestrator | Wednesday 08 October 2025 15:18:46 +0000 (0:00:01.209) 0:00:08.487 *****
2025-10-08 15:18:49.996747 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-10-08 15:18:49.996758 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-10-08 15:18:49.996768 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-10-08 15:18:49.996779 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996809 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996820 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996831 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996841 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996852 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-10-08 15:18:49.996862 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996873 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996883 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996894 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996904 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996915 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-10-08 15:18:49.996925 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.996962 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.996973 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.996984 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.996994 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.997005 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-10-08 15:18:49.997015 | orchestrator |
2025-10-08 15:18:49.997026 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-10-08 15:18:49.997037 | orchestrator | Wednesday 08 October 2025 15:18:47 +0000 (0:00:01.178) 0:00:09.666 *****
2025-10-08 15:18:49.997048 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:49.997058 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:49.997069 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:49.997079 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:49.997090 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:49.997101 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:49.997111 | orchestrator |
2025-10-08 15:18:49.997122 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-10-08 15:18:49.997133 | orchestrator | Wednesday 08 October 2025 15:18:48 +0000 (0:00:00.528) 0:00:09.872 *****
2025-10-08 15:18:49.997143 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:49.997154 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:49.997164 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:49.997175 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:49.997186 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:49.997196 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:49.997207 | orchestrator |
2025-10-08 15:18:49.997218 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-10-08 15:18:49.997228 | orchestrator | Wednesday 08 October 2025 15:18:48 +0000 (0:00:00.201) 0:00:10.400 *****
2025-10-08 15:18:49.997239 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:49.997250 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:49.997268 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:49.997279 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:49.997290 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:49.997300 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:49.997311 | orchestrator |
2025-10-08 15:18:49.997322 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-10-08 15:18:49.997332 | orchestrator | Wednesday 08 October 2025 15:18:48 +0000 (0:00:00.201) 0:00:10.601 *****
2025-10-08 15:18:49.997343 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-10-08 15:18:49.997354 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:49.997364 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-08 15:18:49.997375 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-10-08 15:18:49.997386 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-10-08 15:18:49.997396 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:49.997407 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-10-08 15:18:49.997417 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:49.997428 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-10-08 15:18:49.997439 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:49.997449 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:49.997460 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:49.997470 | orchestrator |
2025-10-08 15:18:49.997481 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-10-08 15:18:49.997492 | orchestrator | Wednesday 08 October 2025 15:18:49 +0000 (0:00:00.714) 0:00:11.316 *****
2025-10-08 15:18:49.997502 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:49.997513 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:49.997524 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:49.997534 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:49.997545 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:49.997555 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:49.997566 | orchestrator |
2025-10-08 15:18:49.997577 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-10-08 15:18:49.997587 | orchestrator | Wednesday 08 October 2025 15:18:49 +0000 (0:00:00.184) 0:00:11.500 *****
2025-10-08 15:18:49.997598 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:49.997609 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:49.997619 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:49.997630 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:49.997640 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:49.997651 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:49.997662 | orchestrator |
2025-10-08 15:18:49.997672 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-10-08 15:18:49.997683 | orchestrator | Wednesday 08 October 2025 15:18:49 +0000 (0:00:00.158) 0:00:11.659 *****
2025-10-08 15:18:49.997694 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:49.997709 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:49.997728 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:49.997748 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:49.997776 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:51.161006 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:51.161107 | orchestrator |
2025-10-08 15:18:51.161122 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-10-08 15:18:51.161135 | orchestrator | Wednesday 08 October 2025 15:18:49 +0000 (0:00:00.149) 0:00:11.809 *****
2025-10-08 15:18:51.161147 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:18:51.161158 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:18:51.161168 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:18:51.161179 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:18:51.161190 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:18:51.161202 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:18:51.161213 | orchestrator |
2025-10-08 15:18:51.161224 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-10-08 15:18:51.161261 | orchestrator | Wednesday 08 October 2025 15:18:50 +0000 (0:00:00.657) 0:00:12.466 *****
2025-10-08 15:18:51.161273 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:18:51.161283 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:18:51.161294 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:18:51.161305 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:18:51.161315 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:18:51.161326 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:18:51.161337 | orchestrator |
2025-10-08 15:18:51.161348 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:18:51.161377 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161390 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161400 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161412 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161422 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161433 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 15:18:51.161444 | orchestrator |
2025-10-08 15:18:51.161455 | orchestrator |
2025-10-08 15:18:51.161465 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:18:51.161476 | orchestrator | Wednesday 08 October 2025 15:18:50 +0000 (0:00:00.236) 0:00:12.702 *****
2025-10-08 15:18:51.161487 | orchestrator | ===============================================================================
2025-10-08 15:18:51.161498 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s
2025-10-08 15:18:51.161509 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s
2025-10-08 15:18:51.161522 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s
2025-10-08 15:18:51.161534 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s
2025-10-08 15:18:51.161546 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s
2025-10-08 15:18:51.161557 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s
2025-10-08 15:18:51.161575 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2025-10-08 15:18:51.161587 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-10-08 15:18:51.161599 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-10-08 15:18:51.161611 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.53s
2025-10-08 15:18:51.161623 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-10-08 15:18:51.161635 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.21s
2025-10-08 15:18:51.161647 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-10-08 15:18:51.161659 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2025-10-08 15:18:51.161670 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-10-08 15:18:51.161682 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-10-08 15:18:51.161695 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-10-08 15:18:51.161713 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-10-08 15:18:51.517151 | orchestrator | + osism apply --environment custom facts
2025-10-08 15:18:53.439455 | orchestrator | 2025-10-08 15:18:53 | INFO  | Trying to run play facts in environment custom
2025-10-08 15:19:03.637000 | orchestrator | 2025-10-08 15:19:03 | INFO  | Task aaf68c2e-3558-45a5-b2d8-62d238bbea65 (facts) was prepared for execution.
2025-10-08 15:19:03.637117 | orchestrator | 2025-10-08 15:19:03 | INFO  | It takes a moment until task aaf68c2e-3558-45a5-b2d8-62d238bbea65 (facts) has been started and output is visible here.
2025-10-08 15:19:48.190997 | orchestrator |
2025-10-08 15:19:48.191116 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-10-08 15:19:48.191134 | orchestrator |
2025-10-08 15:19:48.191146 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-10-08 15:19:48.191157 | orchestrator | Wednesday 08 October 2025 15:19:07 +0000 (0:00:00.088) 0:00:00.088 *****
2025-10-08 15:19:48.191169 | orchestrator | ok: [testbed-manager]
2025-10-08 15:19:48.191181 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:19:48.191193 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.191204 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:19:48.191215 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:19:48.191226 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.191237 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.191247 | orchestrator |
2025-10-08 15:19:48.191258 | orchestrator | TASK [Copy fact file] **********************************************************
2025-10-08 15:19:48.191269 | orchestrator | Wednesday 08 October 2025 15:19:09 +0000 (0:00:01.366) 0:00:01.455 *****
2025-10-08 15:19:48.191280 | orchestrator | ok: [testbed-manager]
2025-10-08 15:19:48.191291 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:19:48.191302 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.191313 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.191324 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.191335 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:19:48.191346 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:19:48.191356 | orchestrator |
2025-10-08 15:19:48.191368 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-10-08 15:19:48.191379 | orchestrator |
2025-10-08 15:19:48.191390 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-10-08 15:19:48.191401 | orchestrator | Wednesday 08 October 2025 15:19:10 +0000 (0:00:01.190) 0:00:02.646 *****
2025-10-08 15:19:48.191412 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.191423 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.191434 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.191445 | orchestrator |
2025-10-08 15:19:48.191456 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-10-08 15:19:48.191467 | orchestrator | Wednesday 08 October 2025 15:19:10 +0000 (0:00:00.113) 0:00:02.759 *****
2025-10-08 15:19:48.191480 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.191492 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.191504 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.191516 | orchestrator |
2025-10-08 15:19:48.191528 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-10-08 15:19:48.191540 | orchestrator | Wednesday 08 October 2025 15:19:10 +0000 (0:00:00.206) 0:00:02.961 *****
2025-10-08 15:19:48.191553 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.191565 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.191577 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.191589 | orchestrator |
2025-10-08 15:19:48.191602 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-10-08 15:19:48.191614 | orchestrator | Wednesday 08 October 2025 15:19:10 +0000 (0:00:00.146) 0:00:03.168 *****
2025-10-08 15:19:48.191627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:19:48.191668 | orchestrator |
2025-10-08 15:19:48.191681 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-10-08 15:19:48.191694 | orchestrator | Wednesday 08 October 2025 15:19:11 +0000 (0:00:00.146) 0:00:03.314 *****
2025-10-08 15:19:48.191706 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.191718 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.191730 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.191742 | orchestrator |
2025-10-08 15:19:48.191754 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-10-08 15:19:48.191766 | orchestrator | Wednesday 08 October 2025 15:19:11 +0000 (0:00:00.460) 0:00:03.775 *****
2025-10-08 15:19:48.191778 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:19:48.191790 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:19:48.191802 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:19:48.191814 | orchestrator |
2025-10-08 15:19:48.191827 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-10-08 15:19:48.191838 | orchestrator | Wednesday 08 October 2025 15:19:11 +0000 (0:00:00.138) 0:00:03.913 *****
2025-10-08 15:19:48.191849 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.191859 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.191870 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.191881 | orchestrator |
2025-10-08 15:19:48.191892 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-10-08 15:19:48.191902 | orchestrator | Wednesday 08 October 2025 15:19:12 +0000 (0:00:01.057) 0:00:04.970 *****
2025-10-08 15:19:48.191913 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.191924 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.191958 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.191969 | orchestrator |
2025-10-08 15:19:48.191980 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-10-08 15:19:48.191991 | orchestrator | Wednesday 08 October 2025 15:19:13 +0000 (0:00:00.468) 0:00:05.439 *****
2025-10-08 15:19:48.192002 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.192013 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.192024 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.192035 | orchestrator |
2025-10-08 15:19:48.192046 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-10-08 15:19:48.192057 | orchestrator | Wednesday 08 October 2025 15:19:14 +0000 (0:00:01.059) 0:00:06.498 *****
2025-10-08 15:19:48.192067 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.192078 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.192089 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.192100 | orchestrator |
2025-10-08 15:19:48.192111 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-10-08 15:19:48.192121 | orchestrator | Wednesday 08 October 2025 15:19:30 +0000 (0:00:16.722) 0:00:23.220 *****
2025-10-08 15:19:48.192132 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:19:48.192143 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:19:48.192154 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:19:48.192165 | orchestrator |
2025-10-08 15:19:48.192176 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-10-08 15:19:48.192204 | orchestrator | Wednesday 08 October 2025 15:19:31 +0000 (0:00:00.097) 0:00:23.318 *****
2025-10-08 15:19:48.192215 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:19:48.192226 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:19:48.192237 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:19:48.192247 | orchestrator |
2025-10-08 15:19:48.192258 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-10-08 15:19:48.192269 | orchestrator | Wednesday 08 October 2025 15:19:38 +0000 (0:00:07.666) 0:00:30.984 *****
2025-10-08 15:19:48.192280 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.192291 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.192302 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.192313 | orchestrator |
2025-10-08 15:19:48.192331 | orchestrator | TASK [Copy fact files] *********************************************************
2025-10-08 15:19:48.192343 | orchestrator | Wednesday 08 October 2025 15:19:39 +0000 (0:00:00.467) 0:00:31.452 *****
2025-10-08 15:19:48.192354 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-10-08 15:19:48.192364 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-10-08 15:19:48.192375 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-10-08 15:19:48.192386 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-10-08 15:19:48.192397 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-10-08 15:19:48.192408 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-10-08 15:19:48.192418 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-10-08 15:19:48.192429 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-10-08 15:19:48.192440 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-10-08 15:19:48.192451 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-10-08 15:19:48.192461 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-10-08 15:19:48.192472 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-10-08 15:19:48.192483 | orchestrator |
2025-10-08 15:19:48.192494 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-10-08 15:19:48.192504 | orchestrator | Wednesday 08 October 2025 15:19:42 +0000 (0:00:03.675) 0:00:35.128 *****
2025-10-08 15:19:48.192515 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.192526 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.192537 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.192548 | orchestrator |
2025-10-08 15:19:48.192558 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-10-08 15:19:48.192569 | orchestrator |
2025-10-08 15:19:48.192580 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:19:48.192591 | orchestrator | Wednesday 08 October 2025 15:19:44 +0000 (0:00:01.411) 0:00:36.539 *****
2025-10-08 15:19:48.192602 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:19:48.192661 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:19:48.192674 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:19:48.192684 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:19:48.192695 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:19:48.192706 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:19:48.192717 | orchestrator | ok: [testbed-manager]
2025-10-08 15:19:48.192728 | orchestrator |
2025-10-08 15:19:48.192739 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:19:48.192750 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:19:48.192762 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:19:48.192778 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:19:48.192789 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:19:48.192800 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:19:48.192811 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:19:48.192822 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:19:48.192840 | orchestrator |
2025-10-08 15:19:48.192851 | orchestrator |
2025-10-08 15:19:48.192862 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:19:48.192873 | orchestrator | Wednesday 08 October 2025 15:19:48 +0000 (0:00:03.890) 0:00:40.430 *****
2025-10-08 15:19:48.192883 | orchestrator | ===============================================================================
2025-10-08 15:19:48.192894 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.72s
2025-10-08 15:19:48.192905 | orchestrator | Install required packages (Debian) -------------------------------------- 7.67s
2025-10-08 15:19:48.192916 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2025-10-08 15:19:48.192942 | orchestrator | Copy fact files --------------------------------------------------------- 3.68s
2025-10-08 15:19:48.192953 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.41s
2025-10-08 15:19:48.192964 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2025-10-08 15:19:48.192982 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2025-10-08 15:19:48.406597 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2025-10-08 15:19:48.406674 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2025-10-08 15:19:48.406685 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-10-08 15:19:48.406696 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2025-10-08 15:19:48.406707 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2025-10-08 15:19:48.406718 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-10-08 15:19:48.406729 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-10-08 15:19:48.406740 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-10-08 15:19:48.406751 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2025-10-08 15:19:48.406762 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-10-08 15:19:48.406773 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-10-08 15:19:48.773525 | orchestrator | + osism apply bootstrap
2025-10-08 15:20:00.784184 | orchestrator | 2025-10-08 15:20:00 | INFO  | Task b595b639-fee5-4cb8-8b99-2de655466efb (bootstrap) was prepared for execution.
2025-10-08 15:20:00.784268 | orchestrator | 2025-10-08 15:20:00 | INFO  | It takes a moment until task b595b639-fee5-4cb8-8b99-2de655466efb (bootstrap) has been started and output is visible here.
2025-10-08 15:20:18.172225 | orchestrator |
2025-10-08 15:20:18.172342 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-10-08 15:20:18.172359 | orchestrator |
2025-10-08 15:20:18.172371 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-10-08 15:20:18.172383 | orchestrator | Wednesday 08 October 2025 15:20:05 +0000 (0:00:00.152) 0:00:00.152 *****
2025-10-08 15:20:18.172394 | orchestrator | ok: [testbed-manager]
2025-10-08 15:20:18.172405 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:20:18.172417 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:20:18.172428 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:20:18.172438 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:20:18.172449 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:20:18.172459 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:20:18.172470 | orchestrator |
2025-10-08 15:20:18.172481 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-10-08 15:20:18.172492 | orchestrator |
2025-10-08 15:20:18.172503 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:20:18.172514 | orchestrator | Wednesday 08 October 2025 15:20:05 +0000 (0:00:00.293) 0:00:00.445 *****
2025-10-08 15:20:18.172524 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:20:18.172535 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:20:18.172572 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:20:18.172583 | orchestrator | ok: [testbed-manager]
2025-10-08 15:20:18.172594 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:20:18.172604 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:20:18.172615 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:20:18.172625 | orchestrator |
2025-10-08 15:20:18.172636 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-10-08 15:20:18.172646 | orchestrator |
2025-10-08 15:20:18.172657 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:20:18.172668 | orchestrator | Wednesday 08 October 2025 15:20:09 +0000 (0:00:03.792) 0:00:04.238 *****
2025-10-08 15:20:18.172679 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-10-08 15:20:18.172690 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-10-08 15:20:18.172701 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-10-08 15:20:18.172727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-10-08 15:20:18.172738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-10-08 15:20:18.172749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:20:18.172761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-10-08 15:20:18.172773 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-10-08 15:20:18.172786 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-10-08 15:20:18.172797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 15:20:18.172810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-10-08 15:20:18.172822 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-10-08 15:20:18.172835 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-10-08 15:20:18.172847 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-10-08 15:20:18.172859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 15:20:18.172871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-08 15:20:18.172883 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-10-08 15:20:18.172895 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-10-08 15:20:18.172907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-08 15:20:18.172919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-10-08 15:20:18.172965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-10-08 15:20:18.172977 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:20:18.172989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-08 15:20:18.173001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-10-08 15:20:18.173013 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:20:18.173025 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-10-08 15:20:18.173037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-10-08 15:20:18.173048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-10-08 15:20:18.173060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-10-08 15:20:18.173072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-10-08 15:20:18.173084 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:20:18.173096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-10-08 15:20:18.173108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-10-08 15:20:18.173119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-10-08 15:20:18.173130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-10-08 15:20:18.173140 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:20:18.173151 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-10-08 15:20:18.173161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-10-08 15:20:18.173181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-10-08 15:20:18.173192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-10-08 15:20:18.173202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-10-08 15:20:18.173213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-10-08 15:20:18.173223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:20:18.173234 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-10-08 15:20:18.173244 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-10-08 15:20:18.173255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:20:18.173283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-10-08 15:20:18.173294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-10-08 15:20:18.173305 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:20:18.173315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:20:18.173326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-10-08 15:20:18.173336 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:20:18.173347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-10-08 15:20:18.173357 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-10-08 15:20:18.173367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-10-08 15:20:18.173378 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:20:18.173388 | orchestrator |
2025-10-08 15:20:18.173399 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-10-08 15:20:18.173410 | orchestrator |
2025-10-08 15:20:18.173420 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-10-08 15:20:18.173431 | orchestrator | Wednesday 08 October 2025 15:20:09 +0000 (0:00:00.541)
0:00:04.779 ***** 2025-10-08 15:20:18.173441 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:18.173452 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:18.173462 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:18.173473 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:18.173483 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:18.173493 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:18.173504 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:18.173514 | orchestrator | 2025-10-08 15:20:18.173525 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-10-08 15:20:18.173535 | orchestrator | Wednesday 08 October 2025 15:20:11 +0000 (0:00:01.331) 0:00:06.111 ***** 2025-10-08 15:20:18.173546 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:18.173556 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:18.173566 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:18.173577 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:18.173587 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:18.173598 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:18.173608 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:18.173619 | orchestrator | 2025-10-08 15:20:18.173630 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-10-08 15:20:18.173640 | orchestrator | Wednesday 08 October 2025 15:20:12 +0000 (0:00:01.329) 0:00:07.441 ***** 2025-10-08 15:20:18.173652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:18.173665 | orchestrator | 2025-10-08 15:20:18.173676 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-10-08 15:20:18.173687 | orchestrator | Wednesday 08 
October 2025 15:20:12 +0000 (0:00:00.279) 0:00:07.721 ***** 2025-10-08 15:20:18.173697 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:18.173708 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:18.173718 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:18.173735 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:20:18.173746 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:18.173757 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:18.173767 | orchestrator | changed: [testbed-manager] 2025-10-08 15:20:18.173777 | orchestrator | 2025-10-08 15:20:18.173788 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-10-08 15:20:18.173799 | orchestrator | Wednesday 08 October 2025 15:20:15 +0000 (0:00:02.887) 0:00:10.608 ***** 2025-10-08 15:20:18.173809 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:20:18.173821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:18.173834 | orchestrator | 2025-10-08 15:20:18.173845 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-10-08 15:20:18.173856 | orchestrator | Wednesday 08 October 2025 15:20:15 +0000 (0:00:00.295) 0:00:10.904 ***** 2025-10-08 15:20:18.173874 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:18.173885 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:18.173896 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:18.173906 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:18.173917 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:20:18.173949 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:18.173960 | orchestrator | 2025-10-08 15:20:18.173971 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-10-08 15:20:18.173982 | orchestrator | Wednesday 08 October 2025 15:20:16 +0000 (0:00:01.138) 0:00:12.042 ***** 2025-10-08 15:20:18.173992 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:20:18.174003 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:18.174074 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:20:18.174089 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:18.174099 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:18.174110 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:18.174120 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:18.174131 | orchestrator | 2025-10-08 15:20:18.174142 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-10-08 15:20:18.174153 | orchestrator | Wednesday 08 October 2025 15:20:17 +0000 (0:00:00.597) 0:00:12.640 ***** 2025-10-08 15:20:18.174163 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:20:18.174174 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:20:18.174184 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:20:18.174195 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:20:18.174205 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:20:18.174216 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:20:18.174226 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:18.174237 | orchestrator | 2025-10-08 15:20:18.174247 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-10-08 15:20:18.174259 | orchestrator | Wednesday 08 October 2025 15:20:18 +0000 (0:00:00.438) 0:00:13.079 ***** 2025-10-08 15:20:18.174270 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:20:18.174280 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:20:18.174299 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:20:31.179865 | orchestrator | skipping: 
[testbed-node-2] 2025-10-08 15:20:31.180026 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:20:31.180043 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:20:31.180055 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:20:31.180066 | orchestrator | 2025-10-08 15:20:31.180079 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-10-08 15:20:31.180091 | orchestrator | Wednesday 08 October 2025 15:20:18 +0000 (0:00:00.240) 0:00:13.320 ***** 2025-10-08 15:20:31.180104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:31.180158 | orchestrator | 2025-10-08 15:20:31.180171 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-10-08 15:20:31.180182 | orchestrator | Wednesday 08 October 2025 15:20:18 +0000 (0:00:00.344) 0:00:13.664 ***** 2025-10-08 15:20:31.180194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:31.180205 | orchestrator | 2025-10-08 15:20:31.180216 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-10-08 15:20:31.180227 | orchestrator | Wednesday 08 October 2025 15:20:18 +0000 (0:00:00.311) 0:00:13.976 ***** 2025-10-08 15:20:31.180238 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.180250 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.180260 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.180271 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.180281 | orchestrator | ok: [testbed-node-5] 2025-10-08 
15:20:31.180292 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.180311 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.180322 | orchestrator | 2025-10-08 15:20:31.180333 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-10-08 15:20:31.180344 | orchestrator | Wednesday 08 October 2025 15:20:20 +0000 (0:00:01.605) 0:00:15.582 ***** 2025-10-08 15:20:31.180355 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:20:31.180367 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:20:31.180379 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:20:31.180391 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:20:31.180404 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:20:31.180416 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:20:31.180428 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:20:31.180440 | orchestrator | 2025-10-08 15:20:31.180452 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-10-08 15:20:31.180464 | orchestrator | Wednesday 08 October 2025 15:20:20 +0000 (0:00:00.228) 0:00:15.810 ***** 2025-10-08 15:20:31.180476 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.180488 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.180500 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.180512 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.180524 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.180536 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.180548 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.180560 | orchestrator | 2025-10-08 15:20:31.180572 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-10-08 15:20:31.180584 | orchestrator | Wednesday 08 October 2025 15:20:21 +0000 (0:00:00.551) 0:00:16.362 ***** 2025-10-08 15:20:31.180596 | orchestrator | skipping: 
[testbed-manager] 2025-10-08 15:20:31.180609 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:20:31.180621 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:20:31.180632 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:20:31.180644 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:20:31.180656 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:20:31.180667 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:20:31.180679 | orchestrator | 2025-10-08 15:20:31.180691 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-10-08 15:20:31.180704 | orchestrator | Wednesday 08 October 2025 15:20:21 +0000 (0:00:00.263) 0:00:16.626 ***** 2025-10-08 15:20:31.180717 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.180727 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:31.180737 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:31.180748 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:31.180759 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:31.180769 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:20:31.180789 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:31.180800 | orchestrator | 2025-10-08 15:20:31.180811 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-10-08 15:20:31.180822 | orchestrator | Wednesday 08 October 2025 15:20:22 +0000 (0:00:00.614) 0:00:17.240 ***** 2025-10-08 15:20:31.180832 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.180843 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:31.180853 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:31.180864 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:31.180874 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:31.180885 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:31.180895 | orchestrator | changed: 
[testbed-node-4] 2025-10-08 15:20:31.180906 | orchestrator | 2025-10-08 15:20:31.180917 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-10-08 15:20:31.180945 | orchestrator | Wednesday 08 October 2025 15:20:23 +0000 (0:00:01.168) 0:00:18.408 ***** 2025-10-08 15:20:31.180956 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.180967 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.180978 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.180988 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.180999 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181009 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181020 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181031 | orchestrator | 2025-10-08 15:20:31.181041 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-10-08 15:20:31.181052 | orchestrator | Wednesday 08 October 2025 15:20:24 +0000 (0:00:01.222) 0:00:19.631 ***** 2025-10-08 15:20:31.181081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:31.181093 | orchestrator | 2025-10-08 15:20:31.181104 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-10-08 15:20:31.181115 | orchestrator | Wednesday 08 October 2025 15:20:24 +0000 (0:00:00.339) 0:00:19.971 ***** 2025-10-08 15:20:31.181125 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:20:31.181136 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:20:31.181146 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:20:31.181157 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:31.181167 | orchestrator | changed: [testbed-node-0] 2025-10-08 
15:20:31.181178 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:31.181188 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:20:31.181198 | orchestrator | 2025-10-08 15:20:31.181209 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-08 15:20:31.181220 | orchestrator | Wednesday 08 October 2025 15:20:26 +0000 (0:00:01.424) 0:00:21.395 ***** 2025-10-08 15:20:31.181231 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181241 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181252 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.181262 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.181273 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181283 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181294 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181304 | orchestrator | 2025-10-08 15:20:31.181315 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-08 15:20:31.181326 | orchestrator | Wednesday 08 October 2025 15:20:26 +0000 (0:00:00.255) 0:00:21.651 ***** 2025-10-08 15:20:31.181336 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181347 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181357 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.181367 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.181378 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181388 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181403 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181414 | orchestrator | 2025-10-08 15:20:31.181425 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-08 15:20:31.181443 | orchestrator | Wednesday 08 October 2025 15:20:26 +0000 (0:00:00.231) 0:00:21.882 ***** 2025-10-08 15:20:31.181454 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181464 | 
orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181475 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.181485 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.181495 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181506 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181516 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181527 | orchestrator | 2025-10-08 15:20:31.181537 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-08 15:20:31.181548 | orchestrator | Wednesday 08 October 2025 15:20:27 +0000 (0:00:00.229) 0:00:22.112 ***** 2025-10-08 15:20:31.181559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:20:31.181571 | orchestrator | 2025-10-08 15:20:31.181582 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-08 15:20:31.181593 | orchestrator | Wednesday 08 October 2025 15:20:27 +0000 (0:00:00.301) 0:00:22.413 ***** 2025-10-08 15:20:31.181603 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181614 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.181624 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181635 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.181645 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181656 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181666 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181676 | orchestrator | 2025-10-08 15:20:31.181687 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-08 15:20:31.181698 | orchestrator | Wednesday 08 October 2025 15:20:27 +0000 (0:00:00.553) 0:00:22.966 ***** 2025-10-08 15:20:31.181708 | orchestrator | 
skipping: [testbed-manager] 2025-10-08 15:20:31.181719 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:20:31.181729 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:20:31.181740 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:20:31.181751 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:20:31.181761 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:20:31.181771 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:20:31.181782 | orchestrator | 2025-10-08 15:20:31.181792 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-08 15:20:31.181803 | orchestrator | Wednesday 08 October 2025 15:20:28 +0000 (0:00:00.231) 0:00:23.198 ***** 2025-10-08 15:20:31.181813 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181824 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:31.181834 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:20:31.181845 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181855 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:31.181866 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181876 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.181887 | orchestrator | 2025-10-08 15:20:31.181897 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-08 15:20:31.181908 | orchestrator | Wednesday 08 October 2025 15:20:29 +0000 (0:00:01.114) 0:00:24.313 ***** 2025-10-08 15:20:31.181918 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.181946 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:20:31.181956 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:20:31.181967 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:20:31.181977 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:20:31.181988 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.181998 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:20:31.182009 | orchestrator | 
2025-10-08 15:20:31.182082 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-08 15:20:31.182094 | orchestrator | Wednesday 08 October 2025 15:20:29 +0000 (0:00:00.673) 0:00:24.986 ***** 2025-10-08 15:20:31.182112 | orchestrator | ok: [testbed-manager] 2025-10-08 15:20:31.182123 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:20:31.182133 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:20:31.182144 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:20:31.182163 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.926374 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:21:12.926491 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.926508 | orchestrator | 2025-10-08 15:21:12.926522 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-08 15:21:12.926535 | orchestrator | Wednesday 08 October 2025 15:20:31 +0000 (0:00:01.222) 0:00:26.208 ***** 2025-10-08 15:21:12.926546 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.926557 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.926568 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.926578 | orchestrator | changed: [testbed-manager] 2025-10-08 15:21:12.926589 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:21:12.926600 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:21:12.926610 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:21:12.926621 | orchestrator | 2025-10-08 15:21:12.926632 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-10-08 15:21:12.926643 | orchestrator | Wednesday 08 October 2025 15:20:48 +0000 (0:00:17.434) 0:00:43.643 ***** 2025-10-08 15:21:12.926654 | orchestrator | ok: [testbed-manager] 2025-10-08 15:21:12.926664 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:21:12.926675 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:21:12.926686 | orchestrator 
| ok: [testbed-node-2] 2025-10-08 15:21:12.926696 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.926707 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.926718 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.926728 | orchestrator | 2025-10-08 15:21:12.926739 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-10-08 15:21:12.926750 | orchestrator | Wednesday 08 October 2025 15:20:48 +0000 (0:00:00.273) 0:00:43.916 ***** 2025-10-08 15:21:12.926761 | orchestrator | ok: [testbed-manager] 2025-10-08 15:21:12.926771 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:21:12.926782 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:21:12.926792 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:21:12.926803 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.926813 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.926824 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.926835 | orchestrator | 2025-10-08 15:21:12.926846 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-10-08 15:21:12.926857 | orchestrator | Wednesday 08 October 2025 15:20:49 +0000 (0:00:00.233) 0:00:44.150 ***** 2025-10-08 15:21:12.926868 | orchestrator | ok: [testbed-manager] 2025-10-08 15:21:12.926879 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:21:12.926890 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:21:12.926901 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:21:12.926911 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.926922 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.926956 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.926967 | orchestrator | 2025-10-08 15:21:12.926978 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-10-08 15:21:12.926989 | orchestrator | Wednesday 08 October 2025 15:20:49 +0000 (0:00:00.213) 0:00:44.363 ***** 2025-10-08 
15:21:12.927020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:21:12.927035 | orchestrator | 2025-10-08 15:21:12.927046 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-10-08 15:21:12.927057 | orchestrator | Wednesday 08 October 2025 15:20:49 +0000 (0:00:00.273) 0:00:44.637 ***** 2025-10-08 15:21:12.927068 | orchestrator | ok: [testbed-manager] 2025-10-08 15:21:12.927104 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:21:12.927116 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:21:12.927126 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.927137 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.927147 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:21:12.927157 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.927168 | orchestrator | 2025-10-08 15:21:12.927179 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-10-08 15:21:12.927189 | orchestrator | Wednesday 08 October 2025 15:20:51 +0000 (0:00:02.091) 0:00:46.728 ***** 2025-10-08 15:21:12.927200 | orchestrator | changed: [testbed-manager] 2025-10-08 15:21:12.927211 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:21:12.927221 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:21:12.927232 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:21:12.927242 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:21:12.927253 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:21:12.927263 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:21:12.927273 | orchestrator | 2025-10-08 15:21:12.927284 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-10-08 15:21:12.927295 | 
orchestrator | Wednesday 08 October 2025 15:20:52 +0000 (0:00:01.119) 0:00:47.848 ***** 2025-10-08 15:21:12.927305 | orchestrator | ok: [testbed-manager] 2025-10-08 15:21:12.927316 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:21:12.927326 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:21:12.927337 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:21:12.927348 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:21:12.927358 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:21:12.927369 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:21:12.927379 | orchestrator | 2025-10-08 15:21:12.927390 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-10-08 15:21:12.927401 | orchestrator | Wednesday 08 October 2025 15:20:53 +0000 (0:00:00.856) 0:00:48.704 ***** 2025-10-08 15:21:12.927412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:21:12.927425 | orchestrator | 2025-10-08 15:21:12.927436 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-10-08 15:21:12.927447 | orchestrator | Wednesday 08 October 2025 15:20:53 +0000 (0:00:00.324) 0:00:49.029 ***** 2025-10-08 15:21:12.927458 | orchestrator | changed: [testbed-manager] 2025-10-08 15:21:12.927468 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:21:12.927479 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:21:12.927489 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:21:12.927500 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:21:12.927511 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:21:12.927521 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:21:12.927532 | orchestrator | 2025-10-08 15:21:12.927559 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************
2025-10-08 15:21:12.927571 | orchestrator | Wednesday 08 October 2025 15:20:55 +0000 (0:00:01.090) 0:00:50.120 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Wednesday 08 October 2025 15:20:55 +0000 (0:00:00.320) 0:00:50.440 *****
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Wednesday 08 October 2025 15:21:08 +0000 (0:00:12.617) 0:01:03.058 *****
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-5]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Wednesday 08 October 2025 15:21:08 +0000 (0:00:00.643) 0:01:03.701 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Wednesday 08 October 2025 15:21:09 +0000 (0:00:00.908) 0:01:04.610 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Wednesday 08 October 2025 15:21:09 +0000 (0:00:00.234) 0:01:04.845 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Wednesday 08 October 2025 15:21:10 +0000 (0:00:00.224) 0:01:05.069 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.packages : Install needrestart package] ********************
Wednesday 08 October 2025 15:21:10 +0000 (0:00:00.287) 0:01:05.357 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Wednesday 08 October 2025 15:21:12 +0000 (0:00:01.789) 0:01:07.146 *****
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Wednesday 08 October 2025 15:21:12 +0000 (0:00:00.580) 0:01:07.727 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Update package cache] ***************************
Wednesday 08 October 2025 15:21:12 +0000 (0:00:00.233) 0:01:07.960 *****
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.packages : Download upgrade packages] **********************
Wednesday 08 October 2025 15:21:14 +0000 (0:00:01.141) 0:01:09.102 *****
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [osism.commons.packages : Upgrade packages] *******************************
Wednesday 08 October 2025 15:21:15 +0000 (0:00:01.698) 0:01:10.801 *****
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.packages : Download required packages] *********************
Wednesday 08 October 2025 15:21:18 +0000 (0:00:02.294) 0:01:13.095 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]

TASK [osism.commons.packages : Install required packages] **********************
Wednesday 08 October 2025 15:21:59 +0000 (0:00:41.370) 0:01:54.465 *****
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Wednesday 08 October 2025 15:23:16 +0000 (0:01:17.083) 0:03:11.548 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Wednesday 08 October 2025 15:23:18 +0000 (0:00:01.695) 0:03:13.244 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Wednesday 08 October 2025 15:23:31 +0000 (0:00:13.226) 0:03:26.472 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Wednesday 08 October 2025 15:23:31 +0000 (0:00:00.463) 0:03:26.936 *****
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Wednesday 08 October 2025 15:23:32 +0000 (0:00:00.652) 0:03:27.589 *****
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Wednesday 08 October 2025 15:23:37 +0000 (0:00:04.597) 0:03:32.186 *****
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Wednesday 08 October 2025 15:23:37 +0000 (0:00:00.579) 0:03:32.765 *****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Wednesday 08 October 2025 15:23:38 +0000 (0:00:00.621) 0:03:33.386 *****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Wednesday 08 October 2025 15:23:38 +0000 (0:00:00.497) 0:03:33.884 *****
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Wednesday 08 October 2025 15:23:39 +0000 (0:00:00.663) 0:03:34.547 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.services : Populate service facts] *************************
Wednesday 08 October 2025 15:23:39 +0000 (0:00:00.309) 0:03:34.857 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]

TASK [osism.commons.services : Check services] *********************************
Wednesday 08 October 2025 15:23:45 +0000 (0:00:05.729) 0:03:40.586 *****
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-5]

TASK [osism.commons.services : Start/enable required services] *****************
Wednesday 08 October 2025 15:23:45 +0000 (0:00:00.280) 0:03:40.867 *****
ok: [testbed-manager] => (item=cron)
ok: [testbed-node-0] => (item=cron)
ok: [testbed-node-1] => (item=cron)
ok: [testbed-node-2] => (item=cron)
ok: [testbed-node-3] => (item=cron)
ok: [testbed-node-4] => (item=cron)
ok: [testbed-node-5] => (item=cron)

TASK [osism.commons.motd : Include distribution specific configure tasks] ******
Wednesday 08 October 2025 15:23:46 +0000 (0:00:01.065) 0:03:41.933 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.motd : Remove update-motd package] *************************
Wednesday 08 October 2025 15:23:47 +0000 (0:00:00.551) 0:03:42.485 *****
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
Wednesday 08 October 2025 15:23:48 +0000 (0:00:01.204) 0:03:43.689 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
Wednesday 08 October 2025 15:23:49 +0000 (0:00:00.662) 0:03:44.352 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
Wednesday 08 October 2025 15:23:49 +0000 (0:00:00.656) 0:03:45.008 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]
orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-10-08 15:23:51.535254 | orchestrator | Wednesday 08 October 2025 15:23:50 +0000 (0:00:00.588) 0:03:45.597 ***** 2025-10-08 15:23:51.535274 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935450.700414, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:23:51.535297 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935484.2793553, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:23:51.535309 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935480.459338, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 
15:23:51.535320 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935494.910247, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:23:51.535332 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935493.5578594, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:23:51.535361 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935496.0424826, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.260926 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759935483.5828707, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261107 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261155 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261168 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261180 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261191 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261203 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261243 | orchestrator | changed: [testbed-node-5] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:24:06.261257 | orchestrator | 2025-10-08 15:24:06.261271 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-10-08 15:24:06.261283 | orchestrator | Wednesday 08 October 2025 15:23:51 +0000 (0:00:00.967) 0:03:46.564 ***** 2025-10-08 15:24:06.261305 | orchestrator | changed: [testbed-manager] 2025-10-08 15:24:06.261317 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:24:06.261328 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:24:06.261338 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:24:06.261349 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:24:06.261360 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:24:06.261370 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:24:06.261381 | orchestrator | 2025-10-08 15:24:06.261397 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-10-08 15:24:06.261409 | orchestrator | Wednesday 08 October 2025 15:23:52 +0000 (0:00:01.147) 0:03:47.712 ***** 2025-10-08 15:24:06.261420 | orchestrator | changed: [testbed-manager] 2025-10-08 15:24:06.261430 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:24:06.261441 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:24:06.261452 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:24:06.261464 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:24:06.261477 | orchestrator | changed: 
[testbed-node-4] 2025-10-08 15:24:06.261489 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:24:06.261501 | orchestrator | 2025-10-08 15:24:06.261513 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-10-08 15:24:06.261525 | orchestrator | Wednesday 08 October 2025 15:23:53 +0000 (0:00:01.145) 0:03:48.858 ***** 2025-10-08 15:24:06.261537 | orchestrator | changed: [testbed-manager] 2025-10-08 15:24:06.261548 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:24:06.261560 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:24:06.261574 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:24:06.261586 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:24:06.261598 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:24:06.261610 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:24:06.261622 | orchestrator | 2025-10-08 15:24:06.261634 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-10-08 15:24:06.261646 | orchestrator | Wednesday 08 October 2025 15:23:54 +0000 (0:00:01.137) 0:03:49.996 ***** 2025-10-08 15:24:06.261658 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:24:06.261671 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:24:06.261683 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:24:06.261695 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:24:06.261707 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:24:06.261719 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:24:06.261731 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:24:06.261743 | orchestrator | 2025-10-08 15:24:06.261755 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-10-08 15:24:06.261768 | orchestrator | Wednesday 08 October 2025 15:23:55 +0000 (0:00:00.321) 0:03:50.317 ***** 2025-10-08 15:24:06.261780 | orchestrator | ok: [testbed-manager] 
2025-10-08 15:24:06.261793 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:24:06.261805 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:24:06.261817 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:24:06.261828 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:24:06.261838 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:24:06.261849 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:24:06.261859 | orchestrator | 2025-10-08 15:24:06.261870 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-10-08 15:24:06.261881 | orchestrator | Wednesday 08 October 2025 15:23:56 +0000 (0:00:00.777) 0:03:51.094 ***** 2025-10-08 15:24:06.261894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:24:06.261922 | orchestrator | 2025-10-08 15:24:06.261954 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-10-08 15:24:06.261977 | orchestrator | Wednesday 08 October 2025 15:23:56 +0000 (0:00:00.406) 0:03:51.501 ***** 2025-10-08 15:24:06.261996 | orchestrator | ok: [testbed-manager] 2025-10-08 15:24:06.262006 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:24:06.262073 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:24:06.262087 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:24:06.262098 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:24:06.262109 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:24:06.262120 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:24:06.262130 | orchestrator | 2025-10-08 15:24:06.262141 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-10-08 15:24:06.262152 | orchestrator | Wednesday 08 October 2025 15:24:03 +0000 (0:00:07.453) 
0:03:58.954 ***** 2025-10-08 15:24:06.262163 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:24:06.262174 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:24:06.262184 | orchestrator | ok: [testbed-manager] 2025-10-08 15:24:06.262195 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:24:06.262206 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:24:06.262216 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:24:06.262227 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:24:06.262238 | orchestrator | 2025-10-08 15:24:06.262249 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-10-08 15:24:06.262260 | orchestrator | Wednesday 08 October 2025 15:24:05 +0000 (0:00:01.199) 0:04:00.153 ***** 2025-10-08 15:24:06.262270 | orchestrator | ok: [testbed-manager] 2025-10-08 15:24:06.262281 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:24:06.262292 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:24:06.262302 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:24:06.262313 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:24:06.262323 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:24:06.262334 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:24:06.262345 | orchestrator | 2025-10-08 15:24:06.262364 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-10-08 15:25:14.025381 | orchestrator | Wednesday 08 October 2025 15:24:06 +0000 (0:00:01.138) 0:04:01.292 ***** 2025-10-08 15:25:14.025491 | orchestrator | ok: [testbed-manager] 2025-10-08 15:25:14.025508 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:25:14.025520 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:25:14.025529 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:25:14.025539 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:25:14.025548 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:25:14.025558 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:25:14.025567 | orchestrator 
| 2025-10-08 15:25:14.025578 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-10-08 15:25:14.025589 | orchestrator | Wednesday 08 October 2025 15:24:06 +0000 (0:00:00.305) 0:04:01.598 ***** 2025-10-08 15:25:14.025599 | orchestrator | ok: [testbed-manager] 2025-10-08 15:25:14.025608 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:25:14.025618 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:25:14.025627 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:25:14.025636 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:25:14.025645 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:25:14.025655 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:25:14.025664 | orchestrator | 2025-10-08 15:25:14.025674 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-10-08 15:25:14.025684 | orchestrator | Wednesday 08 October 2025 15:24:06 +0000 (0:00:00.321) 0:04:01.919 ***** 2025-10-08 15:25:14.025693 | orchestrator | ok: [testbed-manager] 2025-10-08 15:25:14.025703 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:25:14.025712 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:25:14.025722 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:25:14.025732 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:25:14.025741 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:25:14.025751 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:25:14.025760 | orchestrator | 2025-10-08 15:25:14.025770 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-10-08 15:25:14.025779 | orchestrator | Wednesday 08 October 2025 15:24:07 +0000 (0:00:00.283) 0:04:02.202 ***** 2025-10-08 15:25:14.025814 | orchestrator | ok: [testbed-manager] 2025-10-08 15:25:14.025824 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:25:14.025833 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:25:14.025842 | orchestrator | ok: 
[testbed-node-2] 2025-10-08 15:25:14.025852 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:25:14.025861 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:25:14.025870 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:25:14.025880 | orchestrator | 2025-10-08 15:25:14.025889 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-10-08 15:25:14.025899 | orchestrator | Wednesday 08 October 2025 15:24:12 +0000 (0:00:05.689) 0:04:07.892 ***** 2025-10-08 15:25:14.025910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:25:14.025923 | orchestrator | 2025-10-08 15:25:14.025934 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-10-08 15:25:14.025970 | orchestrator | Wednesday 08 October 2025 15:24:13 +0000 (0:00:00.427) 0:04:08.319 ***** 2025-10-08 15:25:14.025982 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.025992 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-10-08 15:25:14.026003 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:25:14.026065 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026078 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-10-08 15:25:14.026090 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026101 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:25:14.026110 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-10-08 15:25:14.026120 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026129 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-10-08 
15:25:14.026139 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:25:14.026148 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:25:14.026158 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026167 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-10-08 15:25:14.026176 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026186 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-10-08 15:25:14.026195 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:25:14.026204 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:25:14.026214 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-10-08 15:25:14.026223 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-10-08 15:25:14.026233 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:25:14.026242 | orchestrator | 2025-10-08 15:25:14.026252 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-10-08 15:25:14.026261 | orchestrator | Wednesday 08 October 2025 15:24:13 +0000 (0:00:00.330) 0:04:08.650 ***** 2025-10-08 15:25:14.026272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:25:14.026282 | orchestrator | 2025-10-08 15:25:14.026291 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-10-08 15:25:14.026301 | orchestrator | Wednesday 08 October 2025 15:24:14 +0000 (0:00:00.406) 0:04:09.057 ***** 2025-10-08 15:25:14.026310 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-10-08 15:25:14.026320 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:25:14.026329 | orchestrator | 
skipping: [testbed-node-0] => (item=ModemManager.service)  2025-10-08 15:25:14.026339 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-10-08 15:25:14.026356 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:25:14.026382 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-10-08 15:25:14.026393 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:25:14.026402 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-10-08 15:25:14.026412 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:25:14.026421 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:25:14.026430 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-10-08 15:25:14.026440 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:25:14.026450 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-10-08 15:25:14.026476 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:25:14.026486 | orchestrator | 2025-10-08 15:25:14.026496 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-10-08 15:25:14.026505 | orchestrator | Wednesday 08 October 2025 15:24:14 +0000 (0:00:00.333) 0:04:09.390 ***** 2025-10-08 15:25:14.026519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:25:14.026529 | orchestrator | 2025-10-08 15:25:14.026538 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-10-08 15:25:14.026548 | orchestrator | Wednesday 08 October 2025 15:24:14 +0000 (0:00:00.411) 0:04:09.802 ***** 2025-10-08 15:25:14.026557 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:25:14.026567 | orchestrator | changed: [testbed-node-3] 
2025-10-08 15:25:14.026576 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:25:14.026586 | orchestrator | changed: [testbed-manager] 2025-10-08 15:25:14.026595 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:25:14.026605 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:25:14.026614 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:25:14.026623 | orchestrator | 2025-10-08 15:25:14.026633 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-10-08 15:25:14.026642 | orchestrator | Wednesday 08 October 2025 15:24:48 +0000 (0:00:33.848) 0:04:43.651 ***** 2025-10-08 15:25:14.026652 | orchestrator | changed: [testbed-manager] 2025-10-08 15:25:14.026661 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:25:14.026671 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:25:14.026680 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:25:14.026690 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:25:14.026699 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:25:14.026708 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:25:14.026718 | orchestrator | 2025-10-08 15:25:14.026727 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-10-08 15:25:14.026737 | orchestrator | Wednesday 08 October 2025 15:24:56 +0000 (0:00:07.846) 0:04:51.497 ***** 2025-10-08 15:25:14.026746 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:25:14.026755 | orchestrator | changed: [testbed-manager] 2025-10-08 15:25:14.026765 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:25:14.026774 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:25:14.026783 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:25:14.026793 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:25:14.026802 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:25:14.026811 | orchestrator | 2025-10-08 15:25:14.026821 | orchestrator | TASK 
[osism.commons.cleanup : Remove useless packages from the cache] **********
Wednesday 08 October 2025 15:25:03 +0000 (0:00:07.378) 0:04:58.876 *****
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Wednesday 08 October 2025 15:25:05 +0000 (0:00:01.685) 0:05:00.562 *****
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-manager]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Wednesday 08 October 2025 15:25:11 +0000 (0:00:05.609) 0:05:06.172 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Wednesday 08 October 2025 15:25:11 +0000 (0:00:00.565) 0:05:06.738 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.timezone : Install tzdata package] *************************
Wednesday 08 October 2025 15:25:12 +0000 (0:00:00.716) 0:05:07.454 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Wednesday 08 October 2025 15:25:14 +0000 (0:00:01.604) 0:05:09.059 *****
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-5]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Wednesday 08 October 2025 15:25:14 +0000 (0:00:00.838) 0:05:09.897 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Wednesday 08 October 2025 15:25:15 +0000 (0:00:00.381) 0:05:10.279 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Gather variables for each operating system] ******
Wednesday 08 October 2025 15:25:15 +0000 (0:00:00.449) 0:05:10.730 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Wednesday 08 October 2025 15:25:15 +0000 (0:00:00.299) 0:05:11.030 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Wednesday 08 October 2025 15:25:16 +0000 (0:00:00.309) 0:05:11.340 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Print used docker version] ***********************
Wednesday 08 October 2025 15:25:16 +0000 (0:00:00.304) 0:05:11.644 *****
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Wednesday 08 October 2025 15:25:16 +0000 (0:00:00.280) 0:05:11.925 *****
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Wednesday 08 October 2025 15:25:17 +0000 (0:00:00.307) 0:05:12.232 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Include zram storage tasks] **********************
Wednesday 08 October 2025 15:25:17 +0000 (0:00:00.275) 0:05:12.507 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Include docker install tasks] ********************
Wednesday 08 October 2025 15:25:17 +0000 (0:00:00.291) 0:05:12.798 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Wednesday 08 October 2025 15:25:18 +0000 (0:00:00.427) 0:05:13.226 *****
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-5]

TASK [osism.services.docker : Gather package facts] ****************************
Wednesday 08 October 2025 15:25:19 +0000 (0:00:00.991) 0:05:14.217 *****
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-0]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Wednesday 08 October 2025 15:25:22 +0000 (0:00:02.968) 0:05:17.185 *****
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-5]

TASK [osism.services.docker : Install apt-transport-https package] *************
Wednesday 08 October 2025 15:25:22 +0000 (0:00:00.615) 0:05:17.801 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [osism.services.docker : Add repository gpg key] **************************
Wednesday 08 October 2025 15:25:28 +0000 (0:00:06.161) 0:05:23.962 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
ok: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Add repository] **********************************
Wednesday 08 October 2025 15:25:29 +0000 (0:00:01.045) 0:05:25.007 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Update package cache] ****************************
Wednesday 08 October 2025 15:25:37 +0000 (0:00:07.774) 0:05:32.781 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Pin docker package version] **********************
Wednesday 08 October 2025 15:25:40 +0000 (0:00:03.239) 0:05:36.020 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Wednesday 08 October 2025 15:25:42 +0000 (0:00:01.326) 0:05:37.347 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Unlock containerd package] ***********************
Wednesday 08 October 2025 15:25:43 +0000 (0:00:01.524) 0:05:38.872 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Wednesday 08 October 2025 15:25:44 +0000 (0:00:00.651) 0:05:39.524 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.services.docker : Lock containerd package] *************************
Wednesday 08 October 2025 15:25:54 +0000 (0:00:09.637) 0:05:49.161 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Install docker-cli package] **********************
Wednesday 08 October 2025 15:25:55 +0000 (0:00:00.905) 0:05:50.067 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Install docker package] **************************
Wednesday 08 October 2025 15:26:03 +0000 (0:00:08.617) 0:05:58.684 *****
ok: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Wednesday 08 October 2025 15:26:14 +0000 (0:00:10.673) 0:06:09.357 *****
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-5] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Wednesday 08 October 2025 15:26:15 +0000 (0:00:01.171) 0:06:10.529 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Wednesday 08 October 2025 15:26:15 +0000 (0:00:00.507) 0:06:11.036 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Wednesday 08 October 2025 15:26:19 +0000 (0:00:03.846) 0:06:14.882 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Wednesday 08 October 2025 15:26:20 +0000 (0:00:00.487) 0:06:15.370 *****
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Wednesday 08 October 2025 15:26:21 +0000 (0:00:00.733) 0:06:16.103 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Wednesday 08 October 2025 15:26:21 +0000 (0:00:00.529) 0:06:16.632 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install packages required by docker login] *******
Wednesday 08 October 2025 15:26:22 +0000 (0:00:00.497) 0:06:17.129 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Wednesday 08 October 2025 15:26:22 +0000 (0:00:00.522) 0:06:17.652 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Include config tasks] ****************************
Wednesday 08 October 2025 15:26:24 +0000 (0:00:01.866) 0:06:19.518 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Create plugins directory] ************************
Wednesday 08 October 2025 15:26:25 +0000 (0:00:00.902) 0:06:20.421 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Wednesday 08 October 2025 15:26:26 +0000 (0:00:00.854) 0:06:21.275 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Wednesday 08 October 2025 15:26:27 +0000 (0:00:00.854) 0:06:22.130 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Wednesday 08 October 2025 15:26:28 +0000 (0:00:01.639) 0:06:23.769 *****
skipping: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Copy limits configuration file] ******************
Wednesday 08 October 2025 15:26:30 +0000 (0:00:01.412) 0:06:25.182 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Wednesday 08 October 2025 15:26:31 +0000 (0:00:01.376) 0:06:26.558 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Include service tasks] ***************************
Wednesday 08 October 2025 15:26:32 +0000 (0:00:01.426) 0:06:27.984 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Reload systemd daemon] ***************************
Wednesday 08 October 2025 15:26:34 +0000 (0:00:01.156) 0:06:29.140 *****
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Manage service] **********************************
Wednesday 08 October 2025 15:26:35 +0000 (0:00:01.357) 0:06:30.498 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3] 2025-10-08 15:26:42.897061 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:26:42.897071 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:26:42.897082 | orchestrator | 2025-10-08 15:26:42.897093 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-10-08 15:26:42.897103 | orchestrator | Wednesday 08 October 2025 15:26:36 +0000 (0:00:01.181) 0:06:31.679 ***** 2025-10-08 15:26:42.897114 | orchestrator | ok: [testbed-manager] 2025-10-08 15:26:42.897125 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:26:42.897135 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:26:42.897146 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:26:42.897156 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:26:42.897167 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:26:42.897177 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:26:42.897188 | orchestrator | 2025-10-08 15:26:42.897199 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-10-08 15:26:42.897210 | orchestrator | Wednesday 08 October 2025 15:26:37 +0000 (0:00:01.181) 0:06:32.861 ***** 2025-10-08 15:26:42.897220 | orchestrator | ok: [testbed-manager] 2025-10-08 15:26:42.897231 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:26:42.897242 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:26:42.897252 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:26:42.897263 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:26:42.897273 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:26:42.897284 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:26:42.897294 | orchestrator | 2025-10-08 15:26:42.897305 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-10-08 15:26:42.897316 | orchestrator | Wednesday 08 October 2025 15:26:39 +0000 (0:00:01.292) 0:06:34.154 ***** 2025-10-08 15:26:42.897327 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:26:42.897338 | orchestrator | 2025-10-08 15:26:42.897349 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897360 | orchestrator | Wednesday 08 October 2025 15:26:39 +0000 (0:00:00.890) 0:06:35.044 ***** 2025-10-08 15:26:42.897372 | orchestrator | 2025-10-08 15:26:42.897389 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897407 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.040) 0:06:35.085 ***** 2025-10-08 15:26:42.897425 | orchestrator | 2025-10-08 15:26:42.897442 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897461 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.038) 0:06:35.123 ***** 2025-10-08 15:26:42.897478 | orchestrator | 2025-10-08 15:26:42.897495 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897520 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.059) 0:06:35.183 ***** 2025-10-08 15:26:42.897531 | orchestrator | 2025-10-08 15:26:42.897542 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897553 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.041) 0:06:35.224 ***** 2025-10-08 15:26:42.897563 | orchestrator | 2025-10-08 15:26:42.897574 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897584 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.042) 0:06:35.267 ***** 2025-10-08 15:26:42.897595 | orchestrator | 2025-10-08 
15:26:42.897606 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-08 15:26:42.897616 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.053) 0:06:35.321 ***** 2025-10-08 15:26:42.897627 | orchestrator | 2025-10-08 15:26:42.897637 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-08 15:26:42.897648 | orchestrator | Wednesday 08 October 2025 15:26:40 +0000 (0:00:00.043) 0:06:35.364 ***** 2025-10-08 15:26:42.897659 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:26:42.897669 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:26:42.897680 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:26:42.897691 | orchestrator | 2025-10-08 15:26:42.897702 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-10-08 15:26:42.897712 | orchestrator | Wednesday 08 October 2025 15:26:41 +0000 (0:00:01.183) 0:06:36.548 ***** 2025-10-08 15:26:42.897723 | orchestrator | changed: [testbed-manager] 2025-10-08 15:26:42.897734 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:26:42.897744 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:26:42.897755 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:26:42.897766 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:26:42.897784 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.586088 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.586185 | orchestrator | 2025-10-08 15:27:10.586202 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-10-08 15:27:10.586215 | orchestrator | Wednesday 08 October 2025 15:26:42 +0000 (0:00:01.377) 0:06:37.926 ***** 2025-10-08 15:27:10.586227 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:10.586238 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.586249 | orchestrator | changed: [testbed-node-1] 2025-10-08 
15:27:10.586259 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.586270 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.586281 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.586292 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.586302 | orchestrator | 2025-10-08 15:27:10.586313 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-10-08 15:27:10.586325 | orchestrator | Wednesday 08 October 2025 15:26:45 +0000 (0:00:02.478) 0:06:40.404 ***** 2025-10-08 15:27:10.586345 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:10.586356 | orchestrator | 2025-10-08 15:27:10.586367 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-10-08 15:27:10.586378 | orchestrator | Wednesday 08 October 2025 15:26:45 +0000 (0:00:00.114) 0:06:40.519 ***** 2025-10-08 15:27:10.586389 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.586400 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.586411 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:27:10.586422 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.586432 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.586443 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.586453 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.586464 | orchestrator | 2025-10-08 15:27:10.586475 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-10-08 15:27:10.586486 | orchestrator | Wednesday 08 October 2025 15:26:46 +0000 (0:00:01.110) 0:06:41.629 ***** 2025-10-08 15:27:10.586497 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:10.586531 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:10.586542 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:10.586553 | orchestrator | skipping: [testbed-node-2] 2025-10-08 
15:27:10.586564 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:10.586576 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:10.586588 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:10.586600 | orchestrator | 2025-10-08 15:27:10.586612 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-10-08 15:27:10.586624 | orchestrator | Wednesday 08 October 2025 15:26:47 +0000 (0:00:00.581) 0:06:42.210 ***** 2025-10-08 15:27:10.586637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:27:10.586652 | orchestrator | 2025-10-08 15:27:10.586664 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-10-08 15:27:10.586676 | orchestrator | Wednesday 08 October 2025 15:26:48 +0000 (0:00:01.082) 0:06:43.293 ***** 2025-10-08 15:27:10.586688 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.586700 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:10.586712 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:10.586724 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:10.586737 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:10.586749 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:10.586761 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:10.586773 | orchestrator | 2025-10-08 15:27:10.586786 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-10-08 15:27:10.586798 | orchestrator | Wednesday 08 October 2025 15:26:49 +0000 (0:00:00.835) 0:06:44.129 ***** 2025-10-08 15:27:10.586810 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-10-08 15:27:10.586822 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-10-08 15:27:10.586834 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-10-08 15:27:10.586846 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-10-08 15:27:10.586858 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-10-08 15:27:10.586869 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-10-08 15:27:10.586881 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-10-08 15:27:10.586894 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-10-08 15:27:10.586907 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-10-08 15:27:10.586919 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-10-08 15:27:10.586929 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-10-08 15:27:10.586940 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-10-08 15:27:10.586950 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-10-08 15:27:10.586961 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-10-08 15:27:10.586993 | orchestrator | 2025-10-08 15:27:10.587004 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-10-08 15:27:10.587015 | orchestrator | Wednesday 08 October 2025 15:26:51 +0000 (0:00:02.512) 0:06:46.642 ***** 2025-10-08 15:27:10.587026 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:10.587037 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:10.587047 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:10.587058 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:10.587069 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:10.587079 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:10.587090 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:10.587100 | orchestrator | 2025-10-08 15:27:10.587111 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-10-08 15:27:10.587122 | orchestrator | Wednesday 08 October 2025 15:26:52 +0000 (0:00:00.507) 0:06:47.149 ***** 2025-10-08 15:27:10.587166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:27:10.587180 | orchestrator | 2025-10-08 15:27:10.587192 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-10-08 15:27:10.587203 | orchestrator | Wednesday 08 October 2025 15:26:53 +0000 (0:00:01.003) 0:06:48.153 ***** 2025-10-08 15:27:10.587213 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.587224 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:10.587235 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:10.587245 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:10.587256 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:10.587267 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:10.587278 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:10.587288 | orchestrator | 2025-10-08 15:27:10.587299 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-10-08 15:27:10.587316 | orchestrator | Wednesday 08 October 2025 15:26:53 +0000 (0:00:00.854) 0:06:49.008 ***** 2025-10-08 15:27:10.587327 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:10.587338 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.587349 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:10.587359 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:10.587370 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:10.587380 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:10.587391 | orchestrator | ok: [testbed-node-5] 2025-10-08 
15:27:10.587402 | orchestrator | 2025-10-08 15:27:10.587413 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-10-08 15:27:10.587423 | orchestrator | Wednesday 08 October 2025 15:26:54 +0000 (0:00:00.901) 0:06:49.909 ***** 2025-10-08 15:27:10.587434 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:10.587445 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:10.587456 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:10.587466 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:10.587477 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:10.587487 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:10.587498 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:10.587509 | orchestrator | 2025-10-08 15:27:10.587519 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-10-08 15:27:10.587530 | orchestrator | Wednesday 08 October 2025 15:26:55 +0000 (0:00:00.728) 0:06:50.638 ***** 2025-10-08 15:27:10.587541 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.587551 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:10.587562 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:10.587573 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:10.587583 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:10.587594 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:10.587605 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:10.587615 | orchestrator | 2025-10-08 15:27:10.587626 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-10-08 15:27:10.587637 | orchestrator | Wednesday 08 October 2025 15:26:57 +0000 (0:00:01.427) 0:06:52.066 ***** 2025-10-08 15:27:10.587648 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:10.587658 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:10.587669 | orchestrator | skipping: 
[testbed-node-1] 2025-10-08 15:27:10.587680 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:10.587690 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:10.587701 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:10.587712 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:10.587723 | orchestrator | 2025-10-08 15:27:10.587733 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-10-08 15:27:10.587744 | orchestrator | Wednesday 08 October 2025 15:26:57 +0000 (0:00:00.535) 0:06:52.602 ***** 2025-10-08 15:27:10.587755 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.587772 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.587783 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.587793 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.587804 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:27:10.587814 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.587825 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.587836 | orchestrator | 2025-10-08 15:27:10.587846 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-10-08 15:27:10.587857 | orchestrator | Wednesday 08 October 2025 15:27:04 +0000 (0:00:07.277) 0:06:59.879 ***** 2025-10-08 15:27:10.587868 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.587879 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.587889 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:27:10.587900 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.587910 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.587921 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.587931 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.587942 | orchestrator | 2025-10-08 15:27:10.587953 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-10-08 15:27:10.587963 | orchestrator | Wednesday 08 October 2025 15:27:06 +0000 (0:00:01.347) 0:07:01.227 ***** 2025-10-08 15:27:10.587998 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.588009 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.588020 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:27:10.588031 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.588041 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.588052 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.588062 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.588073 | orchestrator | 2025-10-08 15:27:10.588083 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-10-08 15:27:10.588094 | orchestrator | Wednesday 08 October 2025 15:27:08 +0000 (0:00:01.917) 0:07:03.144 ***** 2025-10-08 15:27:10.588105 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.588115 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:27:10.588126 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:27:10.588137 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:27:10.588147 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:27:10.588158 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:27:10.588168 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:27:10.588179 | orchestrator | 2025-10-08 15:27:10.588190 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-08 15:27:10.588201 | orchestrator | Wednesday 08 October 2025 15:27:09 +0000 (0:00:01.647) 0:07:04.792 ***** 2025-10-08 15:27:10.588211 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:10.588222 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:10.588233 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:10.588243 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:10.588260 | orchestrator | ok: 
[testbed-node-3] 2025-10-08 15:27:41.539059 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:41.539166 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:41.539181 | orchestrator | 2025-10-08 15:27:41.539193 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-08 15:27:41.539206 | orchestrator | Wednesday 08 October 2025 15:27:10 +0000 (0:00:00.822) 0:07:05.614 ***** 2025-10-08 15:27:41.539217 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:41.539228 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:41.539239 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:41.539250 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:41.539261 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:41.539272 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:41.539282 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:41.539293 | orchestrator | 2025-10-08 15:27:41.539305 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-10-08 15:27:41.539335 | orchestrator | Wednesday 08 October 2025 15:27:11 +0000 (0:00:00.983) 0:07:06.597 ***** 2025-10-08 15:27:41.539347 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:41.539357 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:41.539368 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:41.539379 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:41.539390 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:41.539400 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:41.539411 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:41.539422 | orchestrator | 2025-10-08 15:27:41.539432 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-10-08 15:27:41.539443 | orchestrator | Wednesday 08 October 2025 15:27:12 +0000 (0:00:00.511) 0:07:07.109 
***** 2025-10-08 15:27:41.539454 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:41.539464 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:41.539475 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:41.539486 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:41.539496 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:41.539506 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:41.539517 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:41.539527 | orchestrator | 2025-10-08 15:27:41.539538 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-10-08 15:27:41.539549 | orchestrator | Wednesday 08 October 2025 15:27:12 +0000 (0:00:00.539) 0:07:07.649 ***** 2025-10-08 15:27:41.539560 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:41.539573 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:41.539585 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:41.539597 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:41.539609 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:41.539620 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:41.539632 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:41.539644 | orchestrator | 2025-10-08 15:27:41.539656 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-10-08 15:27:41.539668 | orchestrator | Wednesday 08 October 2025 15:27:13 +0000 (0:00:00.524) 0:07:08.174 ***** 2025-10-08 15:27:41.539680 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:41.539692 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:41.539704 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:41.539716 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:41.539728 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:41.539740 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:41.539751 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:41.539764 | orchestrator | 
2025-10-08 15:27:41.539775 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-10-08 15:27:41.539787 | orchestrator | Wednesday 08 October 2025 15:27:13 +0000 (0:00:00.519) 0:07:08.693 ***** 2025-10-08 15:27:41.539799 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:27:41.539810 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:27:41.539822 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:27:41.539834 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:27:41.539845 | orchestrator | ok: [testbed-manager] 2025-10-08 15:27:41.539857 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:27:41.539869 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:27:41.539881 | orchestrator | 2025-10-08 15:27:41.539893 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-10-08 15:27:41.539905 | orchestrator | Wednesday 08 October 2025 15:27:19 +0000 (0:00:05.749) 0:07:14.443 ***** 2025-10-08 15:27:41.539917 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:27:41.539928 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:27:41.539949 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:27:41.539960 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:27:41.539971 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:27:41.540004 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:27:41.540015 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:27:41.540026 | orchestrator | 2025-10-08 15:27:41.540037 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-10-08 15:27:41.540054 | orchestrator | Wednesday 08 October 2025 15:27:19 +0000 (0:00:00.594) 0:07:15.038 ***** 2025-10-08 15:27:41.540068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:27:41.540081 | orchestrator |
2025-10-08 15:27:41.540093 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-10-08 15:27:41.540103 | orchestrator | Wednesday 08 October 2025 15:27:20 +0000 (0:00:00.833) 0:07:15.871 *****
2025-10-08 15:27:41.540114 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:27:41.540125 | orchestrator | ok: [testbed-manager]
2025-10-08 15:27:41.540135 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:27:41.540146 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:27:41.540156 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:27:41.540167 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:27:41.540177 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:27:41.540188 | orchestrator |
2025-10-08 15:27:41.540199 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-10-08 15:27:41.540210 | orchestrator | Wednesday 08 October 2025 15:27:22 +0000 (0:00:02.061) 0:07:17.932 *****
2025-10-08 15:27:41.540220 | orchestrator | ok: [testbed-manager]
2025-10-08 15:27:41.540231 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:27:41.540242 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:27:41.540252 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:27:41.540263 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:27:41.540273 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:27:41.540284 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:27:41.540294 | orchestrator |
2025-10-08 15:27:41.540327 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-10-08 15:27:41.540339 | orchestrator | Wednesday 08 October 2025 15:27:24 +0000 (0:00:01.195) 0:07:19.128 *****
2025-10-08 15:27:41.540350 | orchestrator | ok: [testbed-manager]
2025-10-08 15:27:41.540360 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:27:41.540371 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:27:41.540381 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:27:41.540392 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:27:41.540402 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:27:41.540413 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:27:41.540424 | orchestrator |
2025-10-08 15:27:41.540435 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-10-08 15:27:41.540445 | orchestrator | Wednesday 08 October 2025 15:27:24 +0000 (0:00:00.846) 0:07:19.975 *****
2025-10-08 15:27:41.540461 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540474 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540484 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540495 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540506 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540517 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540528 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-08 15:27:41.540538 | orchestrator |
2025-10-08 15:27:41.540549 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-10-08 15:27:41.540566 | orchestrator | Wednesday 08 October 2025 15:27:26 +0000 (0:00:01.755) 0:07:21.731 *****
2025-10-08 15:27:41.540577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:27:41.540588 | orchestrator |
2025-10-08 15:27:41.540599 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-10-08 15:27:41.540610 | orchestrator | Wednesday 08 October 2025 15:27:27 +0000 (0:00:01.040) 0:07:22.772 *****
2025-10-08 15:27:41.540621 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:27:41.540631 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:27:41.540642 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:27:41.540653 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:27:41.540663 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:27:41.540674 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:27:41.540684 | orchestrator | changed: [testbed-manager]
2025-10-08 15:27:41.540695 | orchestrator |
2025-10-08 15:27:41.540706 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-10-08 15:27:41.540716 | orchestrator | Wednesday 08 October 2025 15:27:36 +0000 (0:00:08.680) 0:07:31.453 *****
2025-10-08 15:27:41.540727 | orchestrator | ok: [testbed-manager]
2025-10-08 15:27:41.540737 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:27:41.540748 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:27:41.540759 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:27:41.540769 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:27:41.540780 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:27:41.540790 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:27:41.540801 | orchestrator |
2025-10-08 15:27:41.540812 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-10-08 15:27:41.540823 | orchestrator | Wednesday 08 October 2025 15:27:38 +0000 (0:00:01.948) 0:07:33.401 *****
2025-10-08 15:27:41.540833 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:27:41.540843 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:27:41.540854 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:27:41.540864 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:27:41.540875 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:27:41.540885 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:27:41.540896 | orchestrator |
2025-10-08 15:27:41.540907 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-10-08 15:27:41.540917 | orchestrator | Wednesday 08 October 2025 15:27:39 +0000 (0:00:01.325) 0:07:34.726 *****
2025-10-08 15:27:41.540928 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:27:41.540939 | orchestrator | changed: [testbed-manager]
2025-10-08 15:27:41.540950 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:27:41.540960 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:27:41.540971 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:27:41.540997 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:27:41.541008 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:27:41.541019 | orchestrator |
2025-10-08 15:27:41.541029 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-10-08 15:27:41.541040 | orchestrator |
2025-10-08 15:27:41.541050 | orchestrator | TASK [Include hardening role] **************************************************
2025-10-08 15:27:41.541061 | orchestrator | Wednesday 08 October 2025 15:27:40 +0000 (0:00:01.295) 0:07:36.022 *****
2025-10-08 15:27:41.541072 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:27:41.541082 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:27:41.541093 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:27:41.541103 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:27:41.541114 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:27:41.541125 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:27:41.541142 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:28:08.015243 | orchestrator |
2025-10-08 15:28:08.015355 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-10-08 15:28:08.015401 | orchestrator |
2025-10-08 15:28:08.015413 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-10-08 15:28:08.015425 | orchestrator | Wednesday 08 October 2025 15:27:41 +0000 (0:00:00.550) 0:07:36.572 *****
2025-10-08 15:28:08.015436 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.015448 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.015458 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.015469 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.015479 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.015490 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.015500 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.015511 | orchestrator |
2025-10-08 15:28:08.015535 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-10-08 15:28:08.015547 | orchestrator | Wednesday 08 October 2025 15:27:43 +0000 (0:00:01.695) 0:07:38.268 *****
2025-10-08 15:28:08.015557 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:08.015569 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:08.015579 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:08.015590 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:08.015601 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:08.015611 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:08.015622 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:08.015632 | orchestrator |
2025-10-08 15:28:08.015643 | orchestrator | TASK [Include auditd role] *****************************************************
2025-10-08 15:28:08.015654 | orchestrator | Wednesday 08 October 2025 15:27:44 +0000 (0:00:01.415) 0:07:39.683 *****
2025-10-08 15:28:08.015664 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:28:08.015675 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:28:08.015686 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:28:08.015696 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:28:08.015707 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:28:08.015717 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:28:08.015728 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:28:08.015741 | orchestrator |
2025-10-08 15:28:08.015754 | orchestrator | TASK [Include smartd role] *****************************************************
2025-10-08 15:28:08.015766 | orchestrator | Wednesday 08 October 2025 15:27:45 +0000 (0:00:00.493) 0:07:40.176 *****
2025-10-08 15:28:08.015779 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:08.015793 | orchestrator |
2025-10-08 15:28:08.015805 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-10-08 15:28:08.015818 | orchestrator | Wednesday 08 October 2025 15:27:46 +0000 (0:00:01.033) 0:07:41.210 *****
2025-10-08 15:28:08.015832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:08.015847 | orchestrator |
2025-10-08 15:28:08.015859 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-10-08 15:28:08.015871 | orchestrator | Wednesday 08 October 2025 15:27:47 +0000 (0:00:00.843) 0:07:42.053 *****
2025-10-08 15:28:08.015884 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.015896 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.015908 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.015921 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.015933 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.015946 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.015958 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.015971 | orchestrator |
2025-10-08 15:28:08.015982 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-10-08 15:28:08.016018 | orchestrator | Wednesday 08 October 2025 15:27:55 +0000 (0:00:08.068) 0:07:50.122 *****
2025-10-08 15:28:08.016029 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016048 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016059 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016069 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016080 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016090 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016101 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016111 | orchestrator |
2025-10-08 15:28:08.016122 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-10-08 15:28:08.016133 | orchestrator | Wednesday 08 October 2025 15:27:55 +0000 (0:00:00.853) 0:07:50.975 *****
2025-10-08 15:28:08.016144 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016155 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016166 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016176 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016187 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016198 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016208 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016219 | orchestrator |
2025-10-08 15:28:08.016230 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-10-08 15:28:08.016240 | orchestrator | Wednesday 08 October 2025 15:27:57 +0000 (0:00:01.576) 0:07:52.552 *****
2025-10-08 15:28:08.016251 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016262 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016272 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016283 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016293 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016304 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016314 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016325 | orchestrator |
2025-10-08 15:28:08.016335 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-10-08 15:28:08.016346 | orchestrator | Wednesday 08 October 2025 15:27:59 +0000 (0:00:01.793) 0:07:54.346 *****
2025-10-08 15:28:08.016357 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016367 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016378 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016388 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016417 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016428 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016439 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016449 | orchestrator |
2025-10-08 15:28:08.016460 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-10-08 15:28:08.016471 | orchestrator | Wednesday 08 October 2025 15:28:00 +0000 (0:00:01.450) 0:07:55.797 *****
2025-10-08 15:28:08.016482 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016492 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016503 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016513 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016524 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016534 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016545 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016556 | orchestrator |
2025-10-08 15:28:08.016566 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-10-08 15:28:08.016577 | orchestrator |
2025-10-08 15:28:08.016593 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-10-08 15:28:08.016605 | orchestrator | Wednesday 08 October 2025 15:28:01 +0000 (0:00:01.135) 0:07:56.933 *****
2025-10-08 15:28:08.016616 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:08.016627 | orchestrator |
2025-10-08 15:28:08.016638 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-10-08 15:28:08.016648 | orchestrator | Wednesday 08 October 2025 15:28:02 +0000 (0:00:00.852) 0:07:57.785 *****
2025-10-08 15:28:08.016666 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:08.016676 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:08.016687 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:08.016698 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:08.016708 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:08.016719 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:08.016730 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:08.016740 | orchestrator |
2025-10-08 15:28:08.016751 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-10-08 15:28:08.016762 | orchestrator | Wednesday 08 October 2025 15:28:03 +0000 (0:00:00.830) 0:07:58.616 *****
2025-10-08 15:28:08.016773 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.016784 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.016794 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.016805 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.016815 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.016826 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.016836 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.016847 | orchestrator |
2025-10-08 15:28:08.016858 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-10-08 15:28:08.016868 | orchestrator | Wednesday 08 October 2025 15:28:04 +0000 (0:00:01.355) 0:07:59.971 *****
2025-10-08 15:28:08.016879 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:08.016890 | orchestrator |
2025-10-08 15:28:08.016901 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-10-08 15:28:08.016911 | orchestrator | Wednesday 08 October 2025 15:28:05 +0000 (0:00:00.865) 0:08:00.837 *****
2025-10-08 15:28:08.016922 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:08.016933 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:08.016943 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:08.016954 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:08.016964 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:08.016975 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:08.017003 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:08.017015 | orchestrator |
2025-10-08 15:28:08.017026 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-10-08 15:28:08.017036 | orchestrator | Wednesday 08 October 2025 15:28:06 +0000 (0:00:00.827) 0:08:01.665 *****
2025-10-08 15:28:08.017047 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:08.017058 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:08.017069 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:08.017079 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:08.017090 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:08.017100 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:08.017111 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:08.017122 | orchestrator |
2025-10-08 15:28:08.017132 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:28:08.017145 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-10-08 15:28:08.017156 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-08 15:28:08.017167 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-08 15:28:08.017178 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-08 15:28:08.017189 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-08 15:28:08.017206 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-08 15:28:08.017217 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-08 15:28:08.017227 | orchestrator |
2025-10-08 15:28:08.017238 | orchestrator |
2025-10-08 15:28:08.017256 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:28:08.482117 | orchestrator | Wednesday 08 October 2025 15:28:07 +0000 (0:00:01.369) 0:08:03.035 *****
2025-10-08 15:28:08.482214 | orchestrator | ===============================================================================
2025-10-08 15:28:08.482226 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.08s
2025-10-08 15:28:08.482238 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.37s
2025-10-08 15:28:08.482249 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.85s
2025-10-08 15:28:08.482260 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.43s
2025-10-08 15:28:08.482290 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.23s
2025-10-08 15:28:08.482302 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.62s
2025-10-08 15:28:08.482313 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.67s
2025-10-08 15:28:08.482324 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.64s
2025-10-08 15:28:08.482335 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.68s
2025-10-08 15:28:08.482346 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.62s
2025-10-08 15:28:08.482356 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.07s
2025-10-08 15:28:08.482367 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.85s
2025-10-08 15:28:08.482378 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.77s
2025-10-08 15:28:08.482389 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.45s
2025-10-08 15:28:08.482399 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.38s
2025-10-08 15:28:08.482410 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.28s
2025-10-08 15:28:08.482421 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.16s
2025-10-08 15:28:08.482431 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.75s
2025-10-08 15:28:08.482442 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.73s
2025-10-08 15:28:08.482453 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.69s
2025-10-08 15:28:08.796421 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-10-08 15:28:08.796508 | orchestrator | + osism apply network
2025-10-08 15:28:21.459758 | orchestrator | 2025-10-08 15:28:21 | INFO  | Task 4127a700-7f1e-4a12-8151-b3545dacb72f (network) was prepared for execution.
2025-10-08 15:28:21.459871 | orchestrator | 2025-10-08 15:28:21 | INFO  | It takes a moment until task 4127a700-7f1e-4a12-8151-b3545dacb72f (network) has been started and output is visible here.
2025-10-08 15:28:51.342742 | orchestrator |
2025-10-08 15:28:51.342854 | orchestrator | PLAY [Apply role network] ******************************************************
2025-10-08 15:28:51.342871 | orchestrator |
2025-10-08 15:28:51.342884 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-10-08 15:28:51.342896 | orchestrator | Wednesday 08 October 2025 15:28:26 +0000 (0:00:00.285) 0:00:00.285 *****
2025-10-08 15:28:51.342907 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.342919 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.342930 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.342941 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.342952 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.342990 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.343054 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.343066 | orchestrator |
2025-10-08 15:28:51.343078 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-10-08 15:28:51.343089 | orchestrator | Wednesday 08 October 2025 15:28:26 +0000 (0:00:00.757) 0:00:01.042 *****
2025-10-08 15:28:51.343102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:51.343116 | orchestrator |
2025-10-08 15:28:51.343128 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-10-08 15:28:51.343138 | orchestrator | Wednesday 08 October 2025 15:28:27 +0000 (0:00:01.209) 0:00:02.252 *****
2025-10-08 15:28:51.343149 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.343160 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.343170 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.343181 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.343191 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.343202 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.343213 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.343223 | orchestrator |
2025-10-08 15:28:51.343234 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-10-08 15:28:51.343245 | orchestrator | Wednesday 08 October 2025 15:28:29 +0000 (0:00:02.011) 0:00:04.263 *****
2025-10-08 15:28:51.343256 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.343267 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.343279 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.343290 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.343303 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.343314 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.343326 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.343337 | orchestrator |
2025-10-08 15:28:51.343349 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-10-08 15:28:51.343361 | orchestrator | Wednesday 08 October 2025 15:28:31 +0000 (0:00:01.812) 0:00:06.076 *****
2025-10-08 15:28:51.343373 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-10-08 15:28:51.343387 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-10-08 15:28:51.343399 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-10-08 15:28:51.343410 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-10-08 15:28:51.343423 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-10-08 15:28:51.343435 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-10-08 15:28:51.343447 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-10-08 15:28:51.343458 | orchestrator |
2025-10-08 15:28:51.343470 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-10-08 15:28:51.343482 | orchestrator | Wednesday 08 October 2025 15:28:32 +0000 (0:00:01.025) 0:00:07.101 *****
2025-10-08 15:28:51.343494 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-10-08 15:28:51.343507 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-08 15:28:51.343534 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-08 15:28:51.343546 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:28:51.343558 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-10-08 15:28:51.343569 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 15:28:51.343581 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-08 15:28:51.343593 | orchestrator |
2025-10-08 15:28:51.343605 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-10-08 15:28:51.343617 | orchestrator | Wednesday 08 October 2025 15:28:36 +0000 (0:00:03.297) 0:00:10.399 *****
2025-10-08 15:28:51.343628 | orchestrator | changed: [testbed-manager]
2025-10-08 15:28:51.343639 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:51.343649 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:51.343669 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:51.343680 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:51.343690 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:51.343701 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:51.343712 | orchestrator |
2025-10-08 15:28:51.343722 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-10-08 15:28:51.343733 | orchestrator | Wednesday 08 October 2025 15:28:37 +0000 (0:00:01.586) 0:00:11.985 *****
2025-10-08 15:28:51.343744 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 15:28:51.343755 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:28:51.343765 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-08 15:28:51.343776 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-10-08 15:28:51.343786 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-10-08 15:28:51.343797 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-08 15:28:51.343807 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-08 15:28:51.343818 | orchestrator |
2025-10-08 15:28:51.343829 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-10-08 15:28:51.343840 | orchestrator | Wednesday 08 October 2025 15:28:39 +0000 (0:00:01.722) 0:00:13.707 *****
2025-10-08 15:28:51.343850 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.343861 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.343872 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.343882 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.343893 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.343904 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.343914 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.343925 | orchestrator |
2025-10-08 15:28:51.343936 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-10-08 15:28:51.343964 | orchestrator | Wednesday 08 October 2025 15:28:40 +0000 (0:00:01.140) 0:00:14.848 *****
2025-10-08 15:28:51.343975 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:28:51.343986 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:28:51.344013 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:28:51.344025 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:28:51.344036 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:28:51.344046 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:28:51.344057 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:28:51.344068 | orchestrator |
2025-10-08 15:28:51.344078 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-10-08 15:28:51.344089 | orchestrator | Wednesday 08 October 2025 15:28:41 +0000 (0:00:00.662) 0:00:15.511 *****
2025-10-08 15:28:51.344100 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.344111 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.344121 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.344132 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.344142 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.344153 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.344163 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.344174 | orchestrator |
2025-10-08 15:28:51.344185 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-10-08 15:28:51.344196 | orchestrator | Wednesday 08 October 2025 15:28:43 +0000 (0:00:02.257) 0:00:17.769 *****
2025-10-08 15:28:51.344206 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:28:51.344217 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:28:51.344228 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:28:51.344238 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:28:51.344249 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:28:51.344259 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:28:51.344270 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-10-08 15:28:51.344283 | orchestrator |
2025-10-08 15:28:51.344293 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-10-08 15:28:51.344304 | orchestrator | Wednesday 08 October 2025 15:28:44 +0000 (0:00:00.930) 0:00:18.699 *****
2025-10-08 15:28:51.344322 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.344333 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:28:51.344343 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:28:51.344354 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:28:51.344365 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:28:51.344375 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:28:51.344386 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:28:51.344396 | orchestrator |
2025-10-08 15:28:51.344407 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-10-08 15:28:51.344418 | orchestrator | Wednesday 08 October 2025 15:28:46 +0000 (0:00:01.688) 0:00:20.388 *****
2025-10-08 15:28:51.344429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:28:51.344441 | orchestrator |
2025-10-08 15:28:51.344452 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-10-08 15:28:51.344463 | orchestrator | Wednesday 08 October 2025 15:28:47 +0000 (0:00:01.304) 0:00:21.692 *****
2025-10-08 15:28:51.344474 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.344484 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.344495 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.344505 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.344516 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.344527 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.344537 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.344548 | orchestrator |
2025-10-08 15:28:51.344559 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-10-08 15:28:51.344570 | orchestrator | Wednesday 08 October 2025 15:28:48 +0000 (0:00:01.572) 0:00:23.265 *****
2025-10-08 15:28:51.344580 | orchestrator | ok: [testbed-manager]
2025-10-08 15:28:51.344591 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:28:51.344601 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:28:51.344612 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:28:51.344623 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:28:51.344633 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:28:51.344644 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:28:51.344654 | orchestrator |
2025-10-08 15:28:51.344665 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-10-08 15:28:51.344676 | orchestrator | Wednesday 08 October 2025 15:28:49 +0000 (0:00:00.895) 0:00:24.160 *****
2025-10-08 15:28:51.344687 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344697 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344708 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344719 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344729 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344740 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344759 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344770 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344780 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-10-08 15:28:51.344791 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344802 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344812 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-08 15:28:51.344823 | orchestrator | skipping: [testbed-node-5]
=> (item=/etc/netplan/01-osism.yaml)  2025-10-08 15:28:51.344834 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-08 15:28:51.344851 | orchestrator | 2025-10-08 15:28:51.344869 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-10-08 15:29:07.850183 | orchestrator | Wednesday 08 October 2025 15:28:51 +0000 (0:00:01.429) 0:00:25.590 ***** 2025-10-08 15:29:07.850302 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:29:07.850319 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:29:07.850331 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:29:07.850343 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:29:07.850354 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:29:07.850365 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:29:07.850376 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:29:07.850387 | orchestrator | 2025-10-08 15:29:07.850399 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-10-08 15:29:07.850410 | orchestrator | Wednesday 08 October 2025 15:28:51 +0000 (0:00:00.655) 0:00:26.245 ***** 2025-10-08 15:29:07.850424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:29:07.850438 | orchestrator | 2025-10-08 15:29:07.850450 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-10-08 15:29:07.850461 | orchestrator | Wednesday 08 October 2025 15:28:56 +0000 (0:00:04.740) 0:00:30.985 ***** 2025-10-08 15:29:07.850473 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850498 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': 
'192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850698 | orchestrator | 2025-10-08 15:29:07.850709 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-10-08 15:29:07.850720 | orchestrator | Wednesday 08 October 2025 15:29:02 +0000 (0:00:05.804) 0:00:36.790 ***** 2025-10-08 15:29:07.850731 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850787 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-08 15:29:07.850835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:07.850887 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:14.154758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-08 15:29:14.154871 | orchestrator | 2025-10-08 15:29:14.154888 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-10-08 15:29:14.154901 | orchestrator | Wednesday 08 October 2025 15:29:07 +0000 (0:00:05.303) 0:00:42.094 ***** 2025-10-08 15:29:14.154915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:29:14.154927 | orchestrator | 2025-10-08 15:29:14.154938 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-10-08 15:29:14.154949 | orchestrator | Wednesday 08 October 2025 15:29:09 +0000 (0:00:01.305) 0:00:43.399 ***** 2025-10-08 15:29:14.154960 | orchestrator | ok: [testbed-manager] 2025-10-08 15:29:14.154972 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:29:14.154982 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:29:14.154993 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:29:14.155060 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:29:14.155072 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:29:14.155083 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:29:14.155094 | orchestrator | 2025-10-08 15:29:14.155105 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-10-08 15:29:14.155116 | orchestrator | Wednesday 08 October 2025 15:29:10 +0000 (0:00:01.187) 0:00:44.586 ***** 2025-10-08 15:29:14.155127 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155139 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155150 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155160 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155171 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:29:14.155182 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155193 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155229 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155240 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155251 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:29:14.155262 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155273 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155285 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155311 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155323 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:29:14.155336 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155349 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155362 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155373 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155383 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:29:14.155394 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155405 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155415 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155426 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155436 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:29:14.155447 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155458 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155468 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155479 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155489 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:29:14.155500 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-08 15:29:14.155511 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-08 15:29:14.155522 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-08 15:29:14.155533 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-08 15:29:14.155543 | 
orchestrator | skipping: [testbed-node-5] 2025-10-08 15:29:14.155554 | orchestrator | 2025-10-08 15:29:14.155565 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-10-08 15:29:14.155596 | orchestrator | Wednesday 08 October 2025 15:29:12 +0000 (0:00:02.083) 0:00:46.669 ***** 2025-10-08 15:29:14.155607 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:29:14.155618 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:29:14.155629 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:29:14.155640 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:29:14.155651 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:29:14.155661 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:29:14.155672 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:29:14.155682 | orchestrator | 2025-10-08 15:29:14.155693 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-10-08 15:29:14.155704 | orchestrator | Wednesday 08 October 2025 15:29:13 +0000 (0:00:00.628) 0:00:47.297 ***** 2025-10-08 15:29:14.155714 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:29:14.155725 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:29:14.155745 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:29:14.155756 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:29:14.155766 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:29:14.155777 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:29:14.155788 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:29:14.155798 | orchestrator | 2025-10-08 15:29:14.155809 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:29:14.155821 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-08 15:29:14.155833 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155844 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155855 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155865 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155876 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155886 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:29:14.155898 | orchestrator | 2025-10-08 15:29:14.155909 | orchestrator | 2025-10-08 15:29:14.155920 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:29:14.155931 | orchestrator | Wednesday 08 October 2025 15:29:13 +0000 (0:00:00.739) 0:00:48.037 ***** 2025-10-08 15:29:14.155941 | orchestrator | =============================================================================== 2025-10-08 15:29:14.155952 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.80s 2025-10-08 15:29:14.155968 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.30s 2025-10-08 15:29:14.155979 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.74s 2025-10-08 15:29:14.155990 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.30s 2025-10-08 15:29:14.156000 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2025-10-08 15:29:14.156033 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.08s 2025-10-08 15:29:14.156044 | orchestrator | osism.commons.network : Install 
required packages ----------------------- 2.01s 2025-10-08 15:29:14.156054 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s 2025-10-08 15:29:14.156065 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.72s 2025-10-08 15:29:14.156076 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2025-10-08 15:29:14.156086 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2025-10-08 15:29:14.156097 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.57s 2025-10-08 15:29:14.156108 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.43s 2025-10-08 15:29:14.156118 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.31s 2025-10-08 15:29:14.156129 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-10-08 15:29:14.156139 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s 2025-10-08 15:29:14.156150 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2025-10-08 15:29:14.156168 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2025-10-08 15:29:14.156179 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s 2025-10-08 15:29:14.156189 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-10-08 15:29:14.463697 | orchestrator | + osism apply wireguard 2025-10-08 15:29:26.605961 | orchestrator | 2025-10-08 15:29:26 | INFO  | Task e607493f-8c2e-4b8f-87d9-a6e27d65a1bb (wireguard) was prepared for execution. 
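The "Create systemd networkd netdev files" / "network files" tasks above render one .netdev/.network pair per VXLAN item on each host. As an illustrative sketch only (not the actual role templates; the file layout is an assumption based on the 30-vxlan*.netdev/.network paths visible in the cleanup task), the testbed-manager vxlan0 item could translate to roughly:

```ini
; /etc/systemd/network/30-vxlan0.netdev -- hypothetical rendering of the item
; {'key': 'vxlan0', 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

; /etc/systemd/network/30-vxlan0.network -- hypothetical counterpart
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

; one flooding FDB entry per address in 'dests' (all-zero MAC = default entry)
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

Since no multicast group is configured, each 'dests' address would need its own [BridgeFDB] section so broadcast/unknown-unicast traffic is replicated to every unicast peer.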
2025-10-08 15:29:26.606164 | orchestrator | 2025-10-08 15:29:26 | INFO  | It takes a moment until task e607493f-8c2e-4b8f-87d9-a6e27d65a1bb (wireguard) has been started and output is visible here.
2025-10-08 15:29:46.987665 | orchestrator |
2025-10-08 15:29:46.987780 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-10-08 15:29:46.987796 | orchestrator |
2025-10-08 15:29:46.987809 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-10-08 15:29:46.987821 | orchestrator | Wednesday 08 October 2025 15:29:31 +0000 (0:00:00.263) 0:00:00.263 *****
2025-10-08 15:29:46.987832 | orchestrator | ok: [testbed-manager]
2025-10-08 15:29:46.987844 | orchestrator |
2025-10-08 15:29:46.987855 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-10-08 15:29:46.987866 | orchestrator | Wednesday 08 October 2025 15:29:32 +0000 (0:00:01.583) 0:00:01.847 *****
2025-10-08 15:29:46.987877 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.987888 | orchestrator |
2025-10-08 15:29:46.987899 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-10-08 15:29:46.987910 | orchestrator | Wednesday 08 October 2025 15:29:39 +0000 (0:00:06.532) 0:00:08.380 *****
2025-10-08 15:29:46.987921 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.987931 | orchestrator |
2025-10-08 15:29:46.987942 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-10-08 15:29:46.987953 | orchestrator | Wednesday 08 October 2025 15:29:39 +0000 (0:00:00.429) 0:00:08.990 *****
2025-10-08 15:29:46.987984 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.988005 | orchestrator |
2025-10-08 15:29:46.988054 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-10-08 15:29:46.988066 | orchestrator | Wednesday 08 October 2025 15:29:40 +0000 (0:00:00.698) 0:00:09.420 *****
2025-10-08 15:29:46.988077 | orchestrator | ok: [testbed-manager]
2025-10-08 15:29:46.988088 | orchestrator |
2025-10-08 15:29:46.988099 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-10-08 15:29:46.988110 | orchestrator | Wednesday 08 October 2025 15:29:40 +0000 (0:00:00.429) 0:00:10.118 *****
2025-10-08 15:29:46.988121 | orchestrator | ok: [testbed-manager]
2025-10-08 15:29:46.988132 | orchestrator |
2025-10-08 15:29:46.988143 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-10-08 15:29:46.988154 | orchestrator | Wednesday 08 October 2025 15:29:41 +0000 (0:00:00.429) 0:00:10.548 *****
2025-10-08 15:29:46.988164 | orchestrator | ok: [testbed-manager]
2025-10-08 15:29:46.988175 | orchestrator |
2025-10-08 15:29:46.988186 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-10-08 15:29:46.988197 | orchestrator | Wednesday 08 October 2025 15:29:41 +0000 (0:00:00.418) 0:00:10.967 *****
2025-10-08 15:29:46.988209 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.988221 | orchestrator |
2025-10-08 15:29:46.988233 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-10-08 15:29:46.988245 | orchestrator | Wednesday 08 October 2025 15:29:42 +0000 (0:00:01.170) 0:00:12.137 *****
2025-10-08 15:29:46.988258 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-08 15:29:46.988271 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.988283 | orchestrator |
2025-10-08 15:29:46.988295 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-10-08 15:29:46.988307 | orchestrator | Wednesday 08 October 2025 15:29:43 +0000 (0:00:00.960) 0:00:13.098 *****
2025-10-08 15:29:46.988319 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.988359 | orchestrator |
2025-10-08 15:29:46.988371 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-10-08 15:29:46.988400 | orchestrator | Wednesday 08 October 2025 15:29:45 +0000 (0:00:01.736) 0:00:14.834 *****
2025-10-08 15:29:46.988413 | orchestrator | changed: [testbed-manager]
2025-10-08 15:29:46.988424 | orchestrator |
2025-10-08 15:29:46.988436 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:29:46.988449 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:29:46.988462 | orchestrator |
2025-10-08 15:29:46.988474 | orchestrator |
2025-10-08 15:29:46.988486 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:29:46.988498 | orchestrator | Wednesday 08 October 2025 15:29:46 +0000 (0:00:00.982) 0:00:15.817 *****
2025-10-08 15:29:46.988510 | orchestrator | ===============================================================================
2025-10-08 15:29:46.988522 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.53s
2025-10-08 15:29:46.988534 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.74s
2025-10-08 15:29:46.988547 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.58s
2025-10-08 15:29:46.988559 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2025-10-08 15:29:46.988570 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2025-10-08 15:29:46.988581 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2025-10-08 15:29:46.988591 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2025-10-08 15:29:46.988602 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s
2025-10-08 15:29:46.988612 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2025-10-08 15:29:46.988623 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-10-08 15:29:46.988634 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-10-08 15:29:47.320233 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-10-08 15:29:47.352683 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-10-08 15:29:47.352730 | orchestrator | Dload Upload Total Spent Left Speed
2025-10-08 15:29:47.424762 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 207 0 --:--:-- --:--:-- --:--:-- 208
2025-10-08 15:29:47.437647 | orchestrator | + osism apply --environment custom workarounds
2025-10-08 15:29:49.464721 | orchestrator | 2025-10-08 15:29:49 | INFO  | Trying to run play workarounds in environment custom
2025-10-08 15:29:59.668913 | orchestrator | 2025-10-08 15:29:59 | INFO  | Task 7d15d3af-36d8-48a7-b7ad-a4ca3413cab4 (workarounds) was prepared for execution.
2025-10-08 15:29:59.669083 | orchestrator | 2025-10-08 15:29:59 | INFO  | It takes a moment until task 7d15d3af-36d8-48a7-b7ad-a4ca3413cab4 (workarounds) has been started and output is visible here.
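The wireguard play above installs the package, generates server and preshared keys, templates /etc/wireguard/wg0.conf, and enables wg-quick@wg0. A minimal server-side configuration of the kind wg-quick consumes (an illustrative sketch with placeholder keys and addresses, not the role's actual output) looks like:

```ini
; /etc/wireguard/wg0.conf -- illustrative wg-quick server config (placeholder values)
[Interface]
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32
```

The "Restart wg0 service" handler then corresponds to `systemctl restart wg-quick@wg0`, and the client configuration files copied in the previous task would mirror this with the roles of the keys reversed.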
2025-10-08 15:30:25.493175 | orchestrator |
2025-10-08 15:30:25.493292 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:30:25.493310 | orchestrator |
2025-10-08 15:30:25.493323 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-10-08 15:30:25.493335 | orchestrator | Wednesday 08 October 2025 15:30:03 +0000 (0:00:00.178) 0:00:00.178 *****
2025-10-08 15:30:25.493346 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493357 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493368 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493379 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493414 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493426 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493437 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-10-08 15:30:25.493448 | orchestrator |
2025-10-08 15:30:25.493458 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-10-08 15:30:25.493469 | orchestrator |
2025-10-08 15:30:25.493480 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-10-08 15:30:25.493490 | orchestrator | Wednesday 08 October 2025 15:30:04 +0000 (0:00:00.777) 0:00:00.955 *****
2025-10-08 15:30:25.493501 | orchestrator | ok: [testbed-manager]
2025-10-08 15:30:25.493513 | orchestrator |
2025-10-08 15:30:25.493524 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-10-08 15:30:25.493534 | orchestrator |
2025-10-08 15:30:25.493545 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-10-08 15:30:25.493556 | orchestrator | Wednesday 08 October 2025 15:30:07 +0000 (0:00:02.421) 0:00:03.377 *****
2025-10-08 15:30:25.493566 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:30:25.493577 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:30:25.493588 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:30:25.493599 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:30:25.493609 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:30:25.493620 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:30:25.493630 | orchestrator |
2025-10-08 15:30:25.493641 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-10-08 15:30:25.493651 | orchestrator |
2025-10-08 15:30:25.493662 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-10-08 15:30:25.493673 | orchestrator | Wednesday 08 October 2025 15:30:09 +0000 (0:00:01.911) 0:00:05.288 *****
2025-10-08 15:30:25.493685 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493698 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493709 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493721 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493733 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493746 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-10-08 15:30:25.493758 | orchestrator |
2025-10-08 15:30:25.493770 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-10-08 15:30:25.493782 | orchestrator | Wednesday 08 October 2025 15:30:10 +0000 (0:00:01.505) 0:00:06.794 *****
2025-10-08 15:30:25.493794 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:30:25.493806 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:30:25.493818 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:30:25.493830 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:30:25.493841 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:30:25.493853 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:30:25.493865 | orchestrator |
2025-10-08 15:30:25.493877 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-10-08 15:30:25.493889 | orchestrator | Wednesday 08 October 2025 15:30:14 +0000 (0:00:03.846) 0:00:10.640 *****
2025-10-08 15:30:25.493901 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:30:25.493912 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:30:25.493924 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:30:25.493936 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:30:25.493948 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:30:25.493960 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:30:25.493979 | orchestrator |
2025-10-08 15:30:25.493991 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-10-08 15:30:25.494003 | orchestrator |
2025-10-08 15:30:25.494077 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-10-08 15:30:25.494093 | orchestrator | Wednesday 08 October 2025 15:30:15 +0000 (0:00:00.724) 0:00:11.365 *****
2025-10-08 15:30:25.494106 | orchestrator | changed: [testbed-manager]
2025-10-08 15:30:25.494117 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:30:25.494128 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:30:25.494139 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:30:25.494149 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:30:25.494160 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:30:25.494171 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:30:25.494181 | orchestrator |
2025-10-08 15:30:25.494192 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-10-08 15:30:25.494203 | orchestrator | Wednesday 08 October 2025 15:30:16 +0000 (0:00:01.701) 0:00:13.066 *****
2025-10-08 15:30:25.494214 | orchestrator | changed: [testbed-manager]
2025-10-08 15:30:25.494225 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:30:25.494235 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:30:25.494246 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:30:25.494257 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:30:25.494267 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:30:25.494300 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:30:25.494312 | orchestrator |
2025-10-08 15:30:25.494323 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-10-08 15:30:25.494334 | orchestrator | Wednesday 08 October 2025 15:30:18 +0000 (0:00:01.605) 0:00:14.671 *****
2025-10-08 15:30:25.494345 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:30:25.494355 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:30:25.494366 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:30:25.494377 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:30:25.494387 | orchestrator | ok: [testbed-manager]
2025-10-08 15:30:25.494398 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:30:25.494409 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:30:25.494419 | orchestrator |
2025-10-08 15:30:25.494430 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-10-08 15:30:25.494458 | orchestrator | Wednesday 08 October 2025 15:30:19 +0000 (0:00:01.447) 0:00:16.119 *****
2025-10-08 15:30:25.494469 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:30:25.494480 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:30:25.494490 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:30:25.494501 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:30:25.494512 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:30:25.494522 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:30:25.494533 | orchestrator | changed: [testbed-manager]
2025-10-08 15:30:25.494543 | orchestrator |
2025-10-08 15:30:25.494554 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-10-08 15:30:25.494565 | orchestrator | Wednesday 08 October 2025 15:30:22 +0000 (0:00:02.156) 0:00:18.275 *****
2025-10-08 15:30:25.494575 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:30:25.494586 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:30:25.494597 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:30:25.494607 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:30:25.494618 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:30:25.494628 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:30:25.494639 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:30:25.494649 | orchestrator |
2025-10-08 15:30:25.494660 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-10-08 15:30:25.494671 | orchestrator |
2025-10-08 15:30:25.494682 | orchestrator | TASK [Install python3-docker] **************************************************
2025-10-08 15:30:25.494692 | orchestrator | Wednesday 08 October 2025 15:30:22 +0000 (0:00:00.638) 0:00:18.914 *****
2025-10-08 15:30:25.494713 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:30:25.494724 | orchestrator | ok: [testbed-manager]
2025-10-08 15:30:25.494740 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:30:25.494751 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:30:25.494762 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:30:25.494772 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:30:25.494783 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:30:25.494793 | orchestrator |
2025-10-08 15:30:25.494804 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:30:25.494821 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:30:25.494833 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494844 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494855 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494866 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494876 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494887 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:25.494897 | orchestrator |
2025-10-08 15:30:25.494908 | orchestrator |
2025-10-08 15:30:25.494919 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:30:25.494929 | orchestrator | Wednesday 08 October 2025 15:30:25 +0000 (0:00:02.794) 0:00:21.709 *****
2025-10-08 15:30:25.494940 | orchestrator | ===============================================================================
2025-10-08 15:30:25.494951 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s
2025-10-08 15:30:25.494961 | orchestrator | Install python3-docker -------------------------------------------------- 2.80s
2025-10-08 15:30:25.494972 | orchestrator | Apply netplan configuration --------------------------------------------- 2.42s
2025-10-08 15:30:25.494983 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.16s
2025-10-08 15:30:25.494993 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s
2025-10-08 15:30:25.495003 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2025-10-08 15:30:25.495014 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s
2025-10-08 15:30:25.495054 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s
2025-10-08 15:30:25.495065 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.45s
2025-10-08 15:30:25.495076 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s
2025-10-08 15:30:25.495087 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s
2025-10-08 15:30:25.495104 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2025-10-08 15:30:26.176278 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-10-08 15:30:38.314847 | orchestrator | 2025-10-08 15:30:38 | INFO  | Task 0b4a76c9-482a-4ec1-9d84-151790a2223b (reboot) was prepared for execution.
2025-10-08 15:30:38.314962 | orchestrator | 2025-10-08 15:30:38 | INFO  | It takes a moment until task 0b4a76c9-482a-4ec1-9d84-151790a2223b (reboot) has been started and output is visible here.
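The `osism apply reboot` call above triggers the "Reboot system - do not wait for the reboot to complete" task on each node. As a plain-shell sketch of that fire-and-forget half (an assumption for illustration; the play uses Ansible's reboot handling, and `trigger_reboot` is a hypothetical helper name):

```shell
#!/bin/bash
# Issue a reboot over SSH and return immediately. The SSH session is
# expected to die as the host goes down, so a non-zero exit is ignored;
# a separate wait-for-connection step later verifies the node comes back.
trigger_reboot() {
    local host=$1
    ssh -o BatchMode=yes "$host" 'sudo systemctl reboot' || true
}
```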
2025-10-08 15:30:48.680135 | orchestrator |
2025-10-08 15:30:48.680271 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.680288 | orchestrator |
2025-10-08 15:30:48.680300 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.680311 | orchestrator | Wednesday 08 October 2025 15:30:42 +0000 (0:00:00.205) 0:00:00.205 *****
2025-10-08 15:30:48.680323 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:30:48.680334 | orchestrator |
2025-10-08 15:30:48.680345 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.680356 | orchestrator | Wednesday 08 October 2025 15:30:42 +0000 (0:00:00.102) 0:00:00.308 *****
2025-10-08 15:30:48.680367 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:30:48.680378 | orchestrator |
2025-10-08 15:30:48.680389 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.680400 | orchestrator | Wednesday 08 October 2025 15:30:43 +0000 (0:00:00.952) 0:00:01.261 *****
2025-10-08 15:30:48.680411 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:30:48.680421 | orchestrator |
2025-10-08 15:30:48.680432 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.680443 | orchestrator |
2025-10-08 15:30:48.680454 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.680464 | orchestrator | Wednesday 08 October 2025 15:30:43 +0000 (0:00:00.122) 0:00:01.383 *****
2025-10-08 15:30:48.680475 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:30:48.680486 | orchestrator |
2025-10-08 15:30:48.680497 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.680507 | orchestrator | Wednesday 08 October 2025 15:30:43 +0000 (0:00:00.097) 0:00:01.480 *****
2025-10-08 15:30:48.680518 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:30:48.680529 | orchestrator |
2025-10-08 15:30:48.680539 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.680550 | orchestrator | Wednesday 08 October 2025 15:30:44 +0000 (0:00:00.701) 0:00:02.182 *****
2025-10-08 15:30:48.680561 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:30:48.680572 | orchestrator |
2025-10-08 15:30:48.680597 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.680609 | orchestrator |
2025-10-08 15:30:48.680621 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.680634 | orchestrator | Wednesday 08 October 2025 15:30:44 +0000 (0:00:00.123) 0:00:02.306 *****
2025-10-08 15:30:48.680647 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:30:48.680661 | orchestrator |
2025-10-08 15:30:48.680673 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.680687 | orchestrator | Wednesday 08 October 2025 15:30:44 +0000 (0:00:00.207) 0:00:02.513 *****
2025-10-08 15:30:48.680700 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:30:48.680713 | orchestrator |
2025-10-08 15:30:48.680726 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.680739 | orchestrator | Wednesday 08 October 2025 15:30:45 +0000 (0:00:00.685) 0:00:03.199 *****
2025-10-08 15:30:48.680752 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:30:48.680765 | orchestrator |
2025-10-08 15:30:48.680778 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.680791 | orchestrator |
2025-10-08 15:30:48.680804 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.680817 | orchestrator | Wednesday 08 October 2025 15:30:45 +0000 (0:00:00.131) 0:00:03.331 *****
2025-10-08 15:30:48.680830 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:30:48.680843 | orchestrator |
2025-10-08 15:30:48.680856 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.680869 | orchestrator | Wednesday 08 October 2025 15:30:45 +0000 (0:00:00.127) 0:00:03.458 *****
2025-10-08 15:30:48.680883 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:30:48.680896 | orchestrator |
2025-10-08 15:30:48.680909 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.680933 | orchestrator | Wednesday 08 October 2025 15:30:46 +0000 (0:00:00.697) 0:00:04.156 *****
2025-10-08 15:30:48.680946 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:30:48.680959 | orchestrator |
2025-10-08 15:30:48.680971 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.680983 | orchestrator |
2025-10-08 15:30:48.680994 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.681005 | orchestrator | Wednesday 08 October 2025 15:30:46 +0000 (0:00:00.177) 0:00:04.334 *****
2025-10-08 15:30:48.681016 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:30:48.681027 | orchestrator |
2025-10-08 15:30:48.681060 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.681071 | orchestrator | Wednesday 08 October 2025 15:30:46 +0000 (0:00:00.107) 0:00:04.442 *****
2025-10-08 15:30:48.681082 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:30:48.681093 | orchestrator |
2025-10-08 15:30:48.681104 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.681115 | orchestrator | Wednesday 08 October 2025 15:30:47 +0000 (0:00:00.637) 0:00:05.079 *****
2025-10-08 15:30:48.681125 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:30:48.681136 | orchestrator |
2025-10-08 15:30:48.681147 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-10-08 15:30:48.681158 | orchestrator |
2025-10-08 15:30:48.681169 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-10-08 15:30:48.681179 | orchestrator | Wednesday 08 October 2025 15:30:47 +0000 (0:00:00.138) 0:00:05.218 *****
2025-10-08 15:30:48.681190 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:30:48.681201 | orchestrator |
2025-10-08 15:30:48.681212 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-10-08 15:30:48.681223 | orchestrator | Wednesday 08 October 2025 15:30:47 +0000 (0:00:00.108) 0:00:05.327 *****
2025-10-08 15:30:48.681233 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:30:48.681244 | orchestrator |
2025-10-08 15:30:48.681255 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-10-08 15:30:48.681266 | orchestrator | Wednesday 08 October 2025 15:30:48 +0000 (0:00:00.682) 0:00:06.009 *****
2025-10-08 15:30:48.681296 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:30:48.681308 | orchestrator |
2025-10-08 15:30:48.681318 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:30:48.681330 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681342 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681352 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681363 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681374 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681384 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:30:48.681395 | orchestrator |
2025-10-08 15:30:48.681406 | orchestrator |
2025-10-08 15:30:48.681417 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:30:48.681428 | orchestrator | Wednesday 08 October 2025 15:30:48 +0000 (0:00:00.043) 0:00:06.053 *****
2025-10-08 15:30:48.681438 | orchestrator | ===============================================================================
2025-10-08 15:30:48.681457 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.36s
2025-10-08 15:30:48.681473 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-10-08 15:30:48.681485 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.74s
2025-10-08 15:30:49.133492 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-10-08 15:31:01.322520 | orchestrator | 2025-10-08 15:31:01 | INFO  | Task 5e30a54c-95c8-493e-8f34-a4f6d3bb1fe4 (wait-for-connection) was prepared for execution.
2025-10-08 15:31:01.322633 | orchestrator | 2025-10-08 15:31:01 | INFO  | It takes a moment until task 5e30a54c-95c8-493e-8f34-a4f6d3bb1f
e4 (wait-for-connection) has been started and output is visible here.
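The `osism apply wait-for-connection` step blocks until each rebooted node answers again. A minimal plain-shell sketch of that idea (an assumption for illustration; the play actually uses Ansible's `wait_for_connection` module, and `wait_for_ssh` is a hypothetical helper):

```shell
#!/bin/bash
# Retry a no-op SSH command until the host answers or the timeout expires.
wait_for_ssh() {
    local host=$1 timeout=${2:-600} interval=10 waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += interval ))
        if (( waited >= timeout )); then
            echo "$host not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```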
2025-10-08 15:31:17.581133 | orchestrator |
2025-10-08 15:31:17.581243 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-10-08 15:31:17.581259 | orchestrator |
2025-10-08 15:31:17.581272 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-10-08 15:31:17.581283 | orchestrator | Wednesday 08 October 2025 15:31:05 +0000 (0:00:00.262) 0:00:00.262 *****
2025-10-08 15:31:17.581294 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:31:17.581306 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:31:17.581317 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:31:17.581328 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:31:17.581339 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:31:17.581350 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:31:17.581360 | orchestrator |
2025-10-08 15:31:17.581371 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:31:17.581383 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581396 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581407 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581418 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581429 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581439 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:17.581450 | orchestrator |
2025-10-08 15:31:17.581461 | orchestrator |
2025-10-08 15:31:17.581472 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:31:17.581483 | orchestrator | Wednesday 08 October 2025 15:31:17 +0000 (0:00:11.661) 0:00:11.923 *****
2025-10-08 15:31:17.581494 | orchestrator | ===============================================================================
2025-10-08 15:31:17.581505 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.66s
2025-10-08 15:31:17.900358 | orchestrator | + osism apply hddtemp
2025-10-08 15:31:30.002553 | orchestrator | 2025-10-08 15:31:29 | INFO  | Task fac376f9-3d7f-408e-ae66-4d1315cdbdf4 (hddtemp) was prepared for execution.
2025-10-08 15:31:30.002642 | orchestrator | 2025-10-08 15:31:29 | INFO  | It takes a moment until task fac376f9-3d7f-408e-ae66-4d1315cdbdf4 (hddtemp) has been started and output is visible here.
2025-10-08 15:31:57.487926 | orchestrator |
2025-10-08 15:31:57.488013 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-10-08 15:31:57.488025 | orchestrator |
2025-10-08 15:31:57.488033 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-10-08 15:31:57.488042 | orchestrator | Wednesday 08 October 2025 15:31:34 +0000 (0:00:00.250) 0:00:00.250 *****
2025-10-08 15:31:57.488089 | orchestrator | ok: [testbed-manager]
2025-10-08 15:31:57.488121 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:31:57.488129 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:31:57.488136 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:31:57.488143 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:31:57.488150 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:31:57.488157 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:31:57.488164 | orchestrator |
2025-10-08 15:31:57.488171 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-10-08 15:31:57.488179 | orchestrator | Wednesday 08 October 2025 15:31:34 +0000 (0:00:00.709) 0:00:00.959 *****
2025-10-08 15:31:57.488188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:31:57.488197 | orchestrator |
2025-10-08 15:31:57.488205 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-10-08 15:31:57.488212 | orchestrator | Wednesday 08 October 2025 15:31:36 +0000 (0:00:01.214) 0:00:02.174 *****
2025-10-08 15:31:57.488219 | orchestrator | ok: [testbed-manager]
2025-10-08 15:31:57.488226 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:31:57.488233 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:31:57.488240 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:31:57.488247 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:31:57.488254 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:31:57.488261 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:31:57.488268 | orchestrator |
2025-10-08 15:31:57.488275 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-10-08 15:31:57.488283 | orchestrator | Wednesday 08 October 2025 15:31:38 +0000 (0:00:02.062) 0:00:04.236 *****
2025-10-08 15:31:57.488290 | orchestrator | changed: [testbed-manager]
2025-10-08 15:31:57.488297 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:31:57.488316 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:31:57.488323 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:31:57.488331 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:31:57.488338 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:31:57.488345 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:31:57.488352 | orchestrator |
2025-10-08 15:31:57.488359 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-10-08 15:31:57.488366 | orchestrator | Wednesday 08 October 2025 15:31:39 +0000 (0:00:01.203) 0:00:05.440 *****
2025-10-08 15:31:57.488374 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:31:57.488381 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:31:57.488388 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:31:57.488395 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:31:57.488402 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:31:57.488409 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:31:57.488416 | orchestrator | ok: [testbed-manager]
2025-10-08 15:31:57.488423 | orchestrator |
2025-10-08 15:31:57.488430 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-10-08 15:31:57.488437 | orchestrator | Wednesday 08 October 2025 15:31:40 +0000 (0:00:01.217) 0:00:06.657 *****
2025-10-08 15:31:57.488445 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:31:57.488452 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:31:57.488459 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:31:57.488466 | orchestrator | changed: [testbed-manager]
2025-10-08 15:31:57.488473 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:31:57.488480 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:31:57.488489 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:31:57.488496 | orchestrator |
2025-10-08 15:31:57.488505 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-10-08 15:31:57.488513 | orchestrator | Wednesday 08 October 2025 15:31:41 +0000 (0:00:00.736) 0:00:07.394 *****
2025-10-08 15:31:57.488520 | orchestrator | changed: [testbed-manager]
2025-10-08 15:31:57.488529 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:31:57.488536 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:31:57.488551 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:31:57.488560 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:31:57.488567 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:31:57.488575 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:31:57.488583 | orchestrator |
2025-10-08 15:31:57.488591 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-10-08 15:31:57.488600 | orchestrator | Wednesday 08 October 2025 15:31:53 +0000 (0:00:12.543) 0:00:19.937 *****
2025-10-08 15:31:57.488608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:31:57.488616 | orchestrator |
2025-10-08 15:31:57.488624 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-10-08 15:31:57.488632 | orchestrator | Wednesday 08 October 2025 15:31:55 +0000 (0:00:01.244) 0:00:21.181 *****
2025-10-08 15:31:57.488640 | orchestrator | changed: [testbed-manager]
2025-10-08 15:31:57.488648 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:31:57.488656 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:31:57.488664 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:31:57.488673 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:31:57.488681 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:31:57.488688 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:31:57.488696 | orchestrator |
2025-10-08 15:31:57.488704 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:31:57.488713 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:31:57.488735 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488745 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488753 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488762 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488770 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488778 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:31:57.488786 | orchestrator |
2025-10-08 15:31:57.488794 | orchestrator |
2025-10-08 15:31:57.488802 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:31:57.488810 | orchestrator | Wednesday 08 October 2025 15:31:57 +0000 (0:00:01.881) 0:00:23.063 *****
2025-10-08 15:31:57.488819 | orchestrator | ===============================================================================
2025-10-08 15:31:57.488827 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.54s
2025-10-08 15:31:57.488835 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s
2025-10-08 15:31:57.488843 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s
2025-10-08 15:31:57.488850 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s
2025-10-08 15:31:57.488861 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s
2025-10-08 15:31:57.488868 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2025-10-08 15:31:57.488875 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s
2025-10-08 15:31:57.488887 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.74s
2025-10-08 15:31:57.488894 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-10-08 15:31:57.791958 | orchestrator | ++ semver latest 7.1.1
2025-10-08 15:31:57.859509 | orchestrator | + [[ -1 -ge 0 ]]
2025-10-08 15:31:57.859582 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 15:31:57.859597 | orchestrator | + sudo systemctl restart manager.service
2025-10-08 15:32:11.905914 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-10-08 15:32:11.906103 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-10-08 15:32:11.906124 | orchestrator | + local max_attempts=60
2025-10-08 15:32:11.906137 | orchestrator | + local name=ceph-ansible
2025-10-08 15:32:11.906149 | orchestrator | + local attempt_num=1
2025-10-08 15:32:11.906161 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:11.933033 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:11.933087 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:11.933099 | orchestrator | + sleep 5
2025-10-08 15:32:16.937420 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:16.976467 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:16.976548 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:16.976561 | orchestrator | + sleep 5
2025-10-08 15:32:21.979860 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:22.013177 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:22.013328 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:22.013346 | orchestrator | + sleep 5
2025-10-08 15:32:27.018533 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:27.061000 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:27.061279 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:27.061305 | orchestrator | + sleep 5
2025-10-08 15:32:32.066508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:32.104600 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:32.104641 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:32.104646 | orchestrator | + sleep 5
2025-10-08 15:32:37.109833 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:37.154880 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:37.154924 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:37.154930 | orchestrator | + sleep 5
2025-10-08 15:32:42.159966 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:42.199263 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:42.199273 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:42.199277 | orchestrator | + sleep 5
2025-10-08 15:32:47.206519 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:47.269964 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:47.270010 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:47.270044 | orchestrator | + sleep 5
2025-10-08 15:32:52.274886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:52.296846 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-10-08 15:32:52.297327 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-10-08 15:32:52.297417 | orchestrator | + sleep 5
2025-10-08 15:32:57.300579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-10-08 15:32:57.332698 | orchestrator | + [[ starting ==
\h\e\a\l\t\h\y ]] 2025-10-08 15:32:57.332734 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-08 15:32:57.332748 | orchestrator | + sleep 5 2025-10-08 15:33:02.337375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-08 15:33:02.379090 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:02.379178 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-08 15:33:02.379195 | orchestrator | + sleep 5 2025-10-08 15:33:07.385027 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-08 15:33:07.424537 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:07.424603 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-08 15:33:07.424617 | orchestrator | + sleep 5 2025-10-08 15:33:12.430486 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-08 15:33:12.470462 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:12.470519 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-08 15:33:12.470533 | orchestrator | + sleep 5 2025-10-08 15:33:17.476259 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-08 15:33:17.515005 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:17.515097 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-08 15:33:17.515114 | orchestrator | + local max_attempts=60 2025-10-08 15:33:17.515126 | orchestrator | + local name=kolla-ansible 2025-10-08 15:33:17.515139 | orchestrator | + local attempt_num=1 2025-10-08 15:33:17.515592 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-08 15:33:17.546879 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:17.546913 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-08 15:33:17.546924 | orchestrator | + local max_attempts=60 2025-10-08 
15:33:17.547722 | orchestrator | + local name=osism-ansible 2025-10-08 15:33:17.547739 | orchestrator | + local attempt_num=1 2025-10-08 15:33:17.548085 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-08 15:33:17.586997 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-08 15:33:17.587016 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-08 15:33:17.587025 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-10-08 15:33:17.769895 | orchestrator | ARA in ceph-ansible already disabled. 2025-10-08 15:33:17.927726 | orchestrator | ARA in kolla-ansible already disabled. 2025-10-08 15:33:18.077528 | orchestrator | ARA in osism-ansible already disabled. 2025-10-08 15:33:18.239150 | orchestrator | ARA in osism-kubernetes already disabled. 2025-10-08 15:33:18.240418 | orchestrator | + osism apply gather-facts 2025-10-08 15:33:30.315038 | orchestrator | 2025-10-08 15:33:30 | INFO  | Task 3426941c-1aad-4b31-804d-726679cad4e0 (gather-facts) was prepared for execution. 2025-10-08 15:33:30.315173 | orchestrator | 2025-10-08 15:33:30 | INFO  | It takes a moment until task 3426941c-1aad-4b31-804d-726679cad4e0 (gather-facts) has been started and output is visible here. 
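The `set -x` trace above repeats the body of `wait_for_container_healthy` once per poll. A minimal reconstruction of that helper, inferred from the trace rather than taken from the testbed repository's source (the bare `docker` name instead of `/usr/bin/docker` is an assumption made here so the command can be stubbed):

```shell
# Sketch of wait_for_container_healthy as inferred from the trace:
# poll the Docker health status every 5 seconds until the container
# reports "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage as in the deploy script:
#   wait_for_container_healthy 60 ceph-ansible
```

Note how the trace shows the status moving through `unhealthy`, then `starting`, then `healthy` for ceph-ansible, while kolla-ansible and osism-ansible pass on the first poll.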
2025-10-08 15:33:43.327964 | orchestrator |
2025-10-08 15:33:43.328102 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-10-08 15:33:43.328120 | orchestrator |
2025-10-08 15:33:43.328132 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:33:43.328143 | orchestrator | Wednesday 08 October 2025 15:33:34 +0000 (0:00:00.194) 0:00:00.194 *****
2025-10-08 15:33:43.328155 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:33:43.328167 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:33:43.328195 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:33:43.328207 | orchestrator | ok: [testbed-manager]
2025-10-08 15:33:43.328218 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:33:43.328229 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:33:43.328239 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:33:43.328250 | orchestrator |
2025-10-08 15:33:43.328261 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-10-08 15:33:43.328272 | orchestrator |
2025-10-08 15:33:43.328283 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-10-08 15:33:43.328294 | orchestrator | Wednesday 08 October 2025 15:33:42 +0000 (0:00:08.098) 0:00:08.292 *****
2025-10-08 15:33:43.328307 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:33:43.328319 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:33:43.328330 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:33:43.328341 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:33:43.328351 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:33:43.328362 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:33:43.328373 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:33:43.328384 | orchestrator |
2025-10-08 15:33:43.328395 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:33:43.328406 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328418 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328451 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328462 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328473 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328484 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328495 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:33:43.328505 | orchestrator |
2025-10-08 15:33:43.328518 | orchestrator |
2025-10-08 15:33:43.328530 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:33:43.328542 | orchestrator | Wednesday 08 October 2025 15:33:42 +0000 (0:00:00.509) 0:00:08.802 *****
2025-10-08 15:33:43.328554 | orchestrator | ===============================================================================
2025-10-08 15:33:43.328566 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.10s
2025-10-08 15:33:43.328578 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-10-08 15:33:43.658129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-10-08 15:33:43.678998 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-10-08 15:33:43.694303 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-10-08 15:33:43.707945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-10-08 15:33:43.732909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-10-08 15:33:43.747112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-10-08 15:33:43.759443 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-10-08 15:33:43.771938 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-10-08 15:33:43.784809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-10-08 15:33:43.798670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-10-08 15:33:43.812346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-10-08 15:33:43.823381 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-10-08 15:33:43.846608 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-10-08 15:33:43.864715 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-10-08 15:33:43.879468 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-10-08 15:33:43.890971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-10-08 15:33:43.902664 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-10-08 15:33:43.916759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-10-08 15:33:43.929585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-10-08 15:33:43.942808 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-10-08 15:33:43.962273 | orchestrator | + [[ false == \t\r\u\e ]]
2025-10-08 15:33:44.067849 | orchestrator | ok: Runtime: 0:23:38.202738
2025-10-08 15:33:44.152369 |
2025-10-08 15:33:44.152504 | TASK [Deploy services]
2025-10-08 15:33:44.684800 | orchestrator | skipping: Conditional result was False
2025-10-08 15:33:44.704776 |
2025-10-08 15:33:44.704949 | TASK [Deploy in a nutshell]
2025-10-08 15:33:45.397029 | orchestrator |
2025-10-08 15:33:45.397240 | orchestrator | # PULL IMAGES
2025-10-08 15:33:45.397264 | orchestrator |
2025-10-08 15:33:45.397278 | orchestrator | + set -e
2025-10-08 15:33:45.397296 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-08 15:33:45.397316 | orchestrator | ++ export INTERACTIVE=false
2025-10-08 15:33:45.397331 | orchestrator | ++ INTERACTIVE=false
2025-10-08 15:33:45.397375 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-08 15:33:45.397397 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-08 15:33:45.397411 | orchestrator | + source /opt/manager-vars.sh
2025-10-08 15:33:45.397423 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-08 15:33:45.397441 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-08 15:33:45.397453 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-08 15:33:45.397471 | orchestrator | ++ CEPH_VERSION=reef
2025-10-08 15:33:45.397482 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-08 15:33:45.397500 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-08 15:33:45.397511 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 15:33:45.397525 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 15:33:45.397537 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-08 15:33:45.397549 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-08 15:33:45.397560 | orchestrator | ++ export ARA=false
2025-10-08 15:33:45.397571 | orchestrator | ++ ARA=false
2025-10-08 15:33:45.397582 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-08 15:33:45.397593 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-08 15:33:45.397604 | orchestrator | ++ export TEMPEST=false
2025-10-08 15:33:45.397615 | orchestrator | ++ TEMPEST=false
2025-10-08 15:33:45.397626 | orchestrator | ++ export IS_ZUUL=true
2025-10-08 15:33:45.397637 | orchestrator | ++ IS_ZUUL=true
2025-10-08 15:33:45.397648 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 15:33:45.397659 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 15:33:45.397670 | orchestrator | ++ export EXTERNAL_API=false
2025-10-08 15:33:45.397681 | orchestrator | ++ EXTERNAL_API=false
2025-10-08 15:33:45.397692 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-08 15:33:45.397703 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-08 15:33:45.397714 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-08 15:33:45.397725 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-08 15:33:45.397736 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-08 15:33:45.397747 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-08 15:33:45.397758 | orchestrator | + echo
2025-10-08 15:33:45.397776 | orchestrator | + echo '# PULL IMAGES'
2025-10-08 15:33:45.397787 | orchestrator | + echo
2025-10-08 15:33:45.398150 | orchestrator | ++ semver latest 7.0.0
2025-10-08 15:33:45.458140 | orchestrator | + [[ -1 -ge 0 ]]
2025-10-08 15:33:45.458237 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 15:33:45.458259 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-10-08 15:33:47.383496 | orchestrator | 2025-10-08 15:33:47 | INFO  | Trying to run play pull-images in environment custom
2025-10-08 15:33:57.474378 | orchestrator | 2025-10-08 15:33:57 | INFO  | Task d107acec-bb3d-4ff4-b092-0a2463639731 (pull-images) was prepared for execution.
2025-10-08 15:33:57.474502 | orchestrator | 2025-10-08 15:33:57 | INFO  | Task d107acec-bb3d-4ff4-b092-0a2463639731 is running in background. No more output. Check ARA for logs.
2025-10-08 15:33:59.815905 | orchestrator | 2025-10-08 15:33:59 | INFO  | Trying to run play wipe-partitions in environment custom
2025-10-08 15:34:09.929238 | orchestrator | 2025-10-08 15:34:09 | INFO  | Task e21a2f06-83d3-4664-87e7-7d1a850da73a (wipe-partitions) was prepared for execution.
2025-10-08 15:34:09.929341 | orchestrator | 2025-10-08 15:34:09 | INFO  | It takes a moment until task e21a2f06-83d3-4664-87e7-7d1a850da73a (wipe-partitions) has been started and output is visible here.
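The pair of tests traced above (`[[ -1 -ge 0 ]]` followed by `[[ latest == \l\a\t\e\s\t ]]`) is a version gate: the command runs when the manager version compares greater-or-equal to a minimum release, or when it is the moving `latest` tag. A sketch of that pattern, with assumptions made explicit (the function name is hypothetical, and `semver` is assumed to be a compare helper printing -1/0/1 for less/equal/greater, as implied by the `-1` seen in the trace):

```shell
# Version gate as seen in the trace. "latest" never compares >= a
# numeric release, so it is special-cased after the semver comparison.
manager_version_at_least() {
    local version=$1 minimum=$2
    # semver (assumed helper) prints -1, 0, or 1 for <, =, >
    if [[ $(semver "$version" "$minimum") -ge 0 ]] || [[ $version == latest ]]; then
        return 0
    fi
    return 1
}

# Example matching the trace, where MANAGER_VERSION=latest:
#   manager_version_at_least "$MANAGER_VERSION" 7.0.0 && osism apply --no-wait -r 2 -e custom pull-images
```

This explains why `semver latest 7.1.1` earlier also returned -1 yet the `latest` branch still restarted the manager service.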
2025-10-08 15:34:22.570364 | orchestrator |
2025-10-08 15:34:22.570471 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-10-08 15:34:22.570487 | orchestrator |
2025-10-08 15:34:22.570499 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-10-08 15:34:22.570515 | orchestrator | Wednesday 08 October 2025 15:34:14 +0000 (0:00:00.140) 0:00:00.140 *****
2025-10-08 15:34:22.570527 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:34:22.570540 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:34:22.570552 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:34:22.570564 | orchestrator |
2025-10-08 15:34:22.570575 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-10-08 15:34:22.570611 | orchestrator | Wednesday 08 October 2025 15:34:14 +0000 (0:00:00.650) 0:00:00.791 *****
2025-10-08 15:34:22.570623 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:34:22.570635 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:34:22.570652 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:34:22.570663 | orchestrator |
2025-10-08 15:34:22.570675 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-10-08 15:34:22.570686 | orchestrator | Wednesday 08 October 2025 15:34:15 +0000 (0:00:00.449) 0:00:01.240 *****
2025-10-08 15:34:22.570697 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:34:22.570709 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:34:22.570720 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:34:22.570731 | orchestrator |
2025-10-08 15:34:22.570742 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-10-08 15:34:22.570753 | orchestrator | Wednesday 08 October 2025 15:34:15 +0000 (0:00:00.645) 0:00:01.885 *****
2025-10-08 15:34:22.570765 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:34:22.570775 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:34:22.570786 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:34:22.570797 | orchestrator |
2025-10-08 15:34:22.570808 | orchestrator | TASK [Check device availability] ***********************************************
2025-10-08 15:34:22.570819 | orchestrator | Wednesday 08 October 2025 15:34:16 +0000 (0:00:00.317) 0:00:02.203 *****
2025-10-08 15:34:22.570830 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-10-08 15:34:22.570845 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-10-08 15:34:22.570856 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-10-08 15:34:22.570867 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-10-08 15:34:22.570879 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-10-08 15:34:22.570892 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-10-08 15:34:22.570904 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-10-08 15:34:22.570916 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-10-08 15:34:22.570928 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-10-08 15:34:22.570940 | orchestrator |
2025-10-08 15:34:22.570953 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-10-08 15:34:22.570965 | orchestrator | Wednesday 08 October 2025 15:34:17 +0000 (0:00:01.188) 0:00:03.391 *****
2025-10-08 15:34:22.570976 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-10-08 15:34:22.570987 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-10-08 15:34:22.570998 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-10-08 15:34:22.571009 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-10-08 15:34:22.571019 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-10-08 15:34:22.571030 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-10-08 15:34:22.571041 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-10-08 15:34:22.571051 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-10-08 15:34:22.571062 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-10-08 15:34:22.571100 | orchestrator |
2025-10-08 15:34:22.571112 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-10-08 15:34:22.571123 | orchestrator | Wednesday 08 October 2025 15:34:18 +0000 (0:00:01.567) 0:00:04.959 *****
2025-10-08 15:34:22.571134 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-10-08 15:34:22.571145 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-10-08 15:34:22.571155 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-10-08 15:34:22.571166 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-10-08 15:34:22.571177 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-10-08 15:34:22.571188 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-10-08 15:34:22.571198 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-10-08 15:34:22.571218 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-10-08 15:34:22.571235 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-10-08 15:34:22.571246 | orchestrator |
2025-10-08 15:34:22.571257 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-10-08 15:34:22.571268 | orchestrator | Wednesday 08 October 2025 15:34:21 +0000 (0:00:02.046) 0:00:07.006 *****
2025-10-08 15:34:22.571279 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:34:22.571290 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:34:22.571301 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:34:22.571312 | orchestrator |
2025-10-08 15:34:22.571323 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-10-08 15:34:22.571334 | orchestrator | Wednesday 08 October 2025 15:34:21 +0000 (0:00:00.634) 0:00:07.641 *****
2025-10-08 15:34:22.571345 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:34:22.571356 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:34:22.571367 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:34:22.571378 | orchestrator |
2025-10-08 15:34:22.571389 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:34:22.571402 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:22.571415 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:22.571443 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:22.571455 | orchestrator |
2025-10-08 15:34:22.571467 | orchestrator |
2025-10-08 15:34:22.571478 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:34:22.571489 | orchestrator | Wednesday 08 October 2025 15:34:22 +0000 (0:00:00.664) 0:00:08.305 *****
2025-10-08 15:34:22.571500 | orchestrator | ===============================================================================
2025-10-08 15:34:22.571510 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.05s
2025-10-08 15:34:22.571521 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2025-10-08 15:34:22.571532 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2025-10-08 15:34:22.571543 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2025-10-08 15:34:22.571554 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s
2025-10-08 15:34:22.571565 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s
2025-10-08 15:34:22.571576 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-10-08 15:34:22.571586 | orchestrator | Remove all rook related logical devices --------------------------------- 0.45s
2025-10-08 15:34:22.571598 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.32s
2025-10-08 15:34:34.809850 | orchestrator | 2025-10-08 15:34:34 | INFO  | Task 47a50fb6-9155-49c1-aa93-5457697533de (facts) was prepared for execution.
2025-10-08 15:34:34.809962 | orchestrator | 2025-10-08 15:34:34 | INFO  | It takes a moment until task 47a50fb6-9155-49c1-aa93-5457697533de (facts) has been started and output is visible here.
2025-10-08 15:34:47.119052 | orchestrator |
2025-10-08 15:34:47.119193 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-10-08 15:34:47.119211 | orchestrator |
2025-10-08 15:34:47.119224 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-10-08 15:34:47.119237 | orchestrator | Wednesday 08 October 2025 15:34:39 +0000 (0:00:00.265) 0:00:00.265 *****
2025-10-08 15:34:47.119248 | orchestrator | ok: [testbed-manager]
2025-10-08 15:34:47.119261 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:34:47.119271 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:34:47.119309 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:34:47.119320 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:34:47.119331 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:34:47.119342 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:34:47.119353 | orchestrator |
2025-10-08 15:34:47.119364 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-10-08 15:34:47.119374 | orchestrator | Wednesday 08 October 2025 15:34:40 +0000 (0:00:01.100) 0:00:01.366 *****
2025-10-08 15:34:47.119385 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:34:47.119397 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:34:47.119408 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:34:47.119419 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:34:47.119429 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:34:47.119440 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:34:47.119451 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:34:47.119462 | orchestrator |
2025-10-08 15:34:47.119473 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-10-08 15:34:47.119484 | orchestrator |
2025-10-08 15:34:47.119511 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:34:47.119523 | orchestrator | Wednesday 08 October 2025 15:34:41 +0000 (0:00:01.131) 0:00:02.498 *****
2025-10-08 15:34:47.119533 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:34:47.119544 | orchestrator | ok: [testbed-manager]
2025-10-08 15:34:47.119556 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:34:47.119567 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:34:47.119578 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:34:47.119589 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:34:47.119601 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:34:47.119613 | orchestrator |
2025-10-08 15:34:47.119626 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-10-08 15:34:47.119638 | orchestrator |
2025-10-08 15:34:47.119650 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-10-08 15:34:47.119662 | orchestrator | Wednesday 08 October 2025 15:34:46 +0000 (0:00:04.784) 0:00:07.282 *****
2025-10-08 15:34:47.119674 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:34:47.119687 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:34:47.119699 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:34:47.119711 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:34:47.119722 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:34:47.119734 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:34:47.119746 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:34:47.119757 | orchestrator |
2025-10-08 15:34:47.119768 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:34:47.119779 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119792 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119803 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119814 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119825 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119836 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119846 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:34:47.119857 | orchestrator |
2025-10-08 15:34:47.119879 | orchestrator |
2025-10-08 15:34:47.119890 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:34:47.119901 | orchestrator | Wednesday 08 October 2025 15:34:46 +0000 (0:00:00.571) 0:00:07.853 *****
2025-10-08 15:34:47.119912 | orchestrator | ===============================================================================
2025-10-08 15:34:47.119923 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s
2025-10-08 15:34:47.119934 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s
2025-10-08 15:34:47.119945 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2025-10-08 15:34:47.119956 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-10-08 15:34:49.484805 | orchestrator | 2025-10-08 15:34:49 | INFO  | Task bc796040-4abd-4d10-a5af-7c9315af393b (ceph-configure-lvm-volumes) was prepared for execution.
2025-10-08 15:34:49.484909 | orchestrator | 2025-10-08 15:34:49 | INFO  | It takes a moment until task bc796040-4abd-4d10-a5af-7c9315af393b (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-10-08 15:35:01.265424 | orchestrator |
2025-10-08 15:35:01.265537 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-10-08 15:35:01.265555 | orchestrator |
2025-10-08 15:35:01.265567 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-10-08 15:35:01.265580 | orchestrator | Wednesday 08 October 2025 15:34:53 +0000 (0:00:00.341) 0:00:00.341 *****
2025-10-08 15:35:01.265591 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 15:35:01.265603 | orchestrator |
2025-10-08 15:35:01.265614 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-10-08 15:35:01.265626 | orchestrator | Wednesday 08 October 2025 15:34:54 +0000 (0:00:00.235) 0:00:00.577 *****
2025-10-08 15:35:01.265637 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:35:01.265649 | orchestrator |
2025-10-08 15:35:01.265660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:35:01.265671 | orchestrator |
Wednesday 08 October 2025 15:34:54 +0000 (0:00:00.212) 0:00:00.789 ***** 2025-10-08 15:35:01.265683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-10-08 15:35:01.265694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-10-08 15:35:01.265706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-10-08 15:35:01.265728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-10-08 15:35:01.265740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-10-08 15:35:01.265751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-10-08 15:35:01.265762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-10-08 15:35:01.265773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-10-08 15:35:01.265784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-10-08 15:35:01.265794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-10-08 15:35:01.265805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-10-08 15:35:01.265816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-10-08 15:35:01.265827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-10-08 15:35:01.265838 | orchestrator | 2025-10-08 15:35:01.265849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.265860 | orchestrator | Wednesday 08 October 2025 15:34:54 +0000 (0:00:00.416) 0:00:01.205 ***** 2025-10-08 
15:35:01.265871 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.265904 | orchestrator | 2025-10-08 15:35:01.265916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.265927 | orchestrator | Wednesday 08 October 2025 15:34:54 +0000 (0:00:00.185) 0:00:01.391 ***** 2025-10-08 15:35:01.265938 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.265950 | orchestrator | 2025-10-08 15:35:01.265962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.265974 | orchestrator | Wednesday 08 October 2025 15:34:55 +0000 (0:00:00.193) 0:00:01.584 ***** 2025-10-08 15:35:01.265986 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.265998 | orchestrator | 2025-10-08 15:35:01.266010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266110 | orchestrator | Wednesday 08 October 2025 15:34:55 +0000 (0:00:00.198) 0:00:01.783 ***** 2025-10-08 15:35:01.266124 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266140 | orchestrator | 2025-10-08 15:35:01.266154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266166 | orchestrator | Wednesday 08 October 2025 15:34:55 +0000 (0:00:00.184) 0:00:01.968 ***** 2025-10-08 15:35:01.266178 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266190 | orchestrator | 2025-10-08 15:35:01.266203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266216 | orchestrator | Wednesday 08 October 2025 15:34:55 +0000 (0:00:00.181) 0:00:02.150 ***** 2025-10-08 15:35:01.266228 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266240 | orchestrator | 2025-10-08 15:35:01.266253 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-10-08 15:35:01.266265 | orchestrator | Wednesday 08 October 2025 15:34:55 +0000 (0:00:00.184) 0:00:02.335 ***** 2025-10-08 15:35:01.266278 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266290 | orchestrator | 2025-10-08 15:35:01.266302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266313 | orchestrator | Wednesday 08 October 2025 15:34:56 +0000 (0:00:00.207) 0:00:02.542 ***** 2025-10-08 15:35:01.266324 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266335 | orchestrator | 2025-10-08 15:35:01.266346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266357 | orchestrator | Wednesday 08 October 2025 15:34:56 +0000 (0:00:00.205) 0:00:02.747 ***** 2025-10-08 15:35:01.266368 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8) 2025-10-08 15:35:01.266381 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8) 2025-10-08 15:35:01.266392 | orchestrator | 2025-10-08 15:35:01.266403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266414 | orchestrator | Wednesday 08 October 2025 15:34:56 +0000 (0:00:00.402) 0:00:03.150 ***** 2025-10-08 15:35:01.266445 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182) 2025-10-08 15:35:01.266457 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182) 2025-10-08 15:35:01.266468 | orchestrator | 2025-10-08 15:35:01.266480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266491 | orchestrator | Wednesday 08 October 2025 15:34:57 +0000 (0:00:00.620) 0:00:03.771 ***** 2025-10-08 15:35:01.266508 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff) 2025-10-08 15:35:01.266520 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff) 2025-10-08 15:35:01.266531 | orchestrator | 2025-10-08 15:35:01.266542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266553 | orchestrator | Wednesday 08 October 2025 15:34:57 +0000 (0:00:00.687) 0:00:04.459 ***** 2025-10-08 15:35:01.266564 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298) 2025-10-08 15:35:01.266585 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298) 2025-10-08 15:35:01.266596 | orchestrator | 2025-10-08 15:35:01.266607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:01.266618 | orchestrator | Wednesday 08 October 2025 15:34:58 +0000 (0:00:00.910) 0:00:05.369 ***** 2025-10-08 15:35:01.266629 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-08 15:35:01.266640 | orchestrator | 2025-10-08 15:35:01.266651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.266662 | orchestrator | Wednesday 08 October 2025 15:34:59 +0000 (0:00:00.330) 0:00:05.699 ***** 2025-10-08 15:35:01.266673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-10-08 15:35:01.266684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-10-08 15:35:01.266695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-10-08 15:35:01.266705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-10-08 15:35:01.266716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-10-08 15:35:01.266727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-10-08 15:35:01.266738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-10-08 15:35:01.266749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-10-08 15:35:01.266759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-10-08 15:35:01.266770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-10-08 15:35:01.266781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-10-08 15:35:01.266792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-10-08 15:35:01.266803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-10-08 15:35:01.266814 | orchestrator | 2025-10-08 15:35:01.266825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.266836 | orchestrator | Wednesday 08 October 2025 15:34:59 +0000 (0:00:00.394) 0:00:06.093 ***** 2025-10-08 15:35:01.266847 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266858 | orchestrator | 2025-10-08 15:35:01.266868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.266879 | orchestrator | Wednesday 08 October 2025 15:34:59 +0000 (0:00:00.197) 0:00:06.291 ***** 2025-10-08 15:35:01.266890 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266901 | orchestrator | 2025-10-08 15:35:01.266912 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-10-08 15:35:01.266923 | orchestrator | Wednesday 08 October 2025 15:35:00 +0000 (0:00:00.197) 0:00:06.488 ***** 2025-10-08 15:35:01.266934 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266945 | orchestrator | 2025-10-08 15:35:01.266956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.266966 | orchestrator | Wednesday 08 October 2025 15:35:00 +0000 (0:00:00.195) 0:00:06.684 ***** 2025-10-08 15:35:01.266977 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.266989 | orchestrator | 2025-10-08 15:35:01.266999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.267010 | orchestrator | Wednesday 08 October 2025 15:35:00 +0000 (0:00:00.196) 0:00:06.881 ***** 2025-10-08 15:35:01.267021 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.267032 | orchestrator | 2025-10-08 15:35:01.267049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.267061 | orchestrator | Wednesday 08 October 2025 15:35:00 +0000 (0:00:00.223) 0:00:07.104 ***** 2025-10-08 15:35:01.267090 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.267101 | orchestrator | 2025-10-08 15:35:01.267112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.267123 | orchestrator | Wednesday 08 October 2025 15:35:00 +0000 (0:00:00.200) 0:00:07.305 ***** 2025-10-08 15:35:01.267134 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:01.267145 | orchestrator | 2025-10-08 15:35:01.267156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:01.267167 | orchestrator | Wednesday 08 October 2025 15:35:01 +0000 (0:00:00.200) 0:00:07.505 ***** 2025-10-08 15:35:01.267185 | 
orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.943945 | orchestrator | 2025-10-08 15:35:07.944058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:07.944118 | orchestrator | Wednesday 08 October 2025 15:35:01 +0000 (0:00:00.223) 0:00:07.728 ***** 2025-10-08 15:35:07.944132 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-10-08 15:35:07.944146 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-10-08 15:35:07.944157 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-10-08 15:35:07.944168 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-10-08 15:35:07.944179 | orchestrator | 2025-10-08 15:35:07.944191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:07.944203 | orchestrator | Wednesday 08 October 2025 15:35:02 +0000 (0:00:00.904) 0:00:08.633 ***** 2025-10-08 15:35:07.944234 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944246 | orchestrator | 2025-10-08 15:35:07.944257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:07.944268 | orchestrator | Wednesday 08 October 2025 15:35:02 +0000 (0:00:00.179) 0:00:08.812 ***** 2025-10-08 15:35:07.944279 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944290 | orchestrator | 2025-10-08 15:35:07.944301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:07.944312 | orchestrator | Wednesday 08 October 2025 15:35:02 +0000 (0:00:00.231) 0:00:09.044 ***** 2025-10-08 15:35:07.944323 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944334 | orchestrator | 2025-10-08 15:35:07.944345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:07.944356 | orchestrator | Wednesday 08 October 2025 15:35:02 +0000 (0:00:00.179) 
0:00:09.223 ***** 2025-10-08 15:35:07.944367 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944378 | orchestrator | 2025-10-08 15:35:07.944389 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-10-08 15:35:07.944400 | orchestrator | Wednesday 08 October 2025 15:35:02 +0000 (0:00:00.199) 0:00:09.423 ***** 2025-10-08 15:35:07.944411 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-10-08 15:35:07.944422 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-10-08 15:35:07.944434 | orchestrator | 2025-10-08 15:35:07.944445 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-08 15:35:07.944456 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.168) 0:00:09.591 ***** 2025-10-08 15:35:07.944467 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944480 | orchestrator | 2025-10-08 15:35:07.944492 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-10-08 15:35:07.944505 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.114) 0:00:09.706 ***** 2025-10-08 15:35:07.944517 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944529 | orchestrator | 2025-10-08 15:35:07.944541 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-08 15:35:07.944553 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.126) 0:00:09.832 ***** 2025-10-08 15:35:07.944566 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944601 | orchestrator | 2025-10-08 15:35:07.944614 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-08 15:35:07.944626 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.129) 0:00:09.961 ***** 2025-10-08 15:35:07.944638 | orchestrator | ok: 
[testbed-node-3] 2025-10-08 15:35:07.944651 | orchestrator | 2025-10-08 15:35:07.944663 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-08 15:35:07.944676 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.139) 0:00:10.100 ***** 2025-10-08 15:35:07.944689 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}}) 2025-10-08 15:35:07.944702 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}}) 2025-10-08 15:35:07.944714 | orchestrator | 2025-10-08 15:35:07.944726 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-08 15:35:07.944739 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.155) 0:00:10.256 ***** 2025-10-08 15:35:07.944752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}})  2025-10-08 15:35:07.944773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}})  2025-10-08 15:35:07.944786 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944798 | orchestrator | 2025-10-08 15:35:07.944811 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-08 15:35:07.944823 | orchestrator | Wednesday 08 October 2025 15:35:03 +0000 (0:00:00.140) 0:00:10.397 ***** 2025-10-08 15:35:07.944834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}})  2025-10-08 15:35:07.944845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}})  2025-10-08 15:35:07.944856 | orchestrator | skipping: [testbed-node-3] 2025-10-08 
15:35:07.944867 | orchestrator | 2025-10-08 15:35:07.944878 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-08 15:35:07.944889 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 (0:00:00.275) 0:00:10.672 ***** 2025-10-08 15:35:07.944900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}})  2025-10-08 15:35:07.944911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}})  2025-10-08 15:35:07.944922 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.944933 | orchestrator | 2025-10-08 15:35:07.944964 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-08 15:35:07.944976 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 (0:00:00.142) 0:00:10.815 ***** 2025-10-08 15:35:07.944987 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:35:07.944998 | orchestrator | 2025-10-08 15:35:07.945009 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-08 15:35:07.945020 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 (0:00:00.134) 0:00:10.950 ***** 2025-10-08 15:35:07.945031 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:35:07.945042 | orchestrator | 2025-10-08 15:35:07.945053 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-08 15:35:07.945064 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 (0:00:00.137) 0:00:11.087 ***** 2025-10-08 15:35:07.945107 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945118 | orchestrator | 2025-10-08 15:35:07.945129 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-08 15:35:07.945140 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 
(0:00:00.126) 0:00:11.214 ***** 2025-10-08 15:35:07.945151 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945162 | orchestrator | 2025-10-08 15:35:07.945182 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-10-08 15:35:07.945193 | orchestrator | Wednesday 08 October 2025 15:35:04 +0000 (0:00:00.133) 0:00:11.347 ***** 2025-10-08 15:35:07.945204 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945215 | orchestrator | 2025-10-08 15:35:07.945226 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-08 15:35:07.945237 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 (0:00:00.136) 0:00:11.483 ***** 2025-10-08 15:35:07.945248 | orchestrator | ok: [testbed-node-3] => { 2025-10-08 15:35:07.945259 | orchestrator |  "ceph_osd_devices": { 2025-10-08 15:35:07.945271 | orchestrator |  "sdb": { 2025-10-08 15:35:07.945282 | orchestrator |  "osd_lvm_uuid": "25f30e7b-7b9e-5d46-b3fc-d4cb59f24626" 2025-10-08 15:35:07.945293 | orchestrator |  }, 2025-10-08 15:35:07.945304 | orchestrator |  "sdc": { 2025-10-08 15:35:07.945315 | orchestrator |  "osd_lvm_uuid": "ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485" 2025-10-08 15:35:07.945326 | orchestrator |  } 2025-10-08 15:35:07.945337 | orchestrator |  } 2025-10-08 15:35:07.945348 | orchestrator | } 2025-10-08 15:35:07.945360 | orchestrator | 2025-10-08 15:35:07.945371 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-08 15:35:07.945381 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 (0:00:00.136) 0:00:11.620 ***** 2025-10-08 15:35:07.945392 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945403 | orchestrator | 2025-10-08 15:35:07.945414 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-08 15:35:07.945425 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 
(0:00:00.126) 0:00:11.746 ***** 2025-10-08 15:35:07.945441 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945453 | orchestrator | 2025-10-08 15:35:07.945464 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-10-08 15:35:07.945475 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 (0:00:00.144) 0:00:11.891 ***** 2025-10-08 15:35:07.945486 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:35:07.945496 | orchestrator | 2025-10-08 15:35:07.945507 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-08 15:35:07.945518 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 (0:00:00.126) 0:00:12.017 ***** 2025-10-08 15:35:07.945529 | orchestrator | changed: [testbed-node-3] => { 2025-10-08 15:35:07.945540 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-08 15:35:07.945552 | orchestrator |  "ceph_osd_devices": { 2025-10-08 15:35:07.945563 | orchestrator |  "sdb": { 2025-10-08 15:35:07.945574 | orchestrator |  "osd_lvm_uuid": "25f30e7b-7b9e-5d46-b3fc-d4cb59f24626" 2025-10-08 15:35:07.945585 | orchestrator |  }, 2025-10-08 15:35:07.945596 | orchestrator |  "sdc": { 2025-10-08 15:35:07.945607 | orchestrator |  "osd_lvm_uuid": "ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485" 2025-10-08 15:35:07.945618 | orchestrator |  } 2025-10-08 15:35:07.945629 | orchestrator |  }, 2025-10-08 15:35:07.945640 | orchestrator |  "lvm_volumes": [ 2025-10-08 15:35:07.945651 | orchestrator |  { 2025-10-08 15:35:07.945662 | orchestrator |  "data": "osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626", 2025-10-08 15:35:07.945673 | orchestrator |  "data_vg": "ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626" 2025-10-08 15:35:07.945684 | orchestrator |  }, 2025-10-08 15:35:07.945695 | orchestrator |  { 2025-10-08 15:35:07.945706 | orchestrator |  "data": "osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485", 2025-10-08 15:35:07.945717 | orchestrator |  "data_vg": 
"ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485" 2025-10-08 15:35:07.945728 | orchestrator |  } 2025-10-08 15:35:07.945739 | orchestrator |  ] 2025-10-08 15:35:07.945749 | orchestrator |  } 2025-10-08 15:35:07.945760 | orchestrator | } 2025-10-08 15:35:07.945771 | orchestrator | 2025-10-08 15:35:07.945782 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-08 15:35:07.945801 | orchestrator | Wednesday 08 October 2025 15:35:05 +0000 (0:00:00.311) 0:00:12.329 ***** 2025-10-08 15:35:07.945811 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-08 15:35:07.945822 | orchestrator | 2025-10-08 15:35:07.945833 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-08 15:35:07.945844 | orchestrator | 2025-10-08 15:35:07.945855 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-08 15:35:07.945866 | orchestrator | Wednesday 08 October 2025 15:35:07 +0000 (0:00:01.639) 0:00:13.968 ***** 2025-10-08 15:35:07.945877 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-08 15:35:07.945888 | orchestrator | 2025-10-08 15:35:07.945899 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-08 15:35:07.945910 | orchestrator | Wednesday 08 October 2025 15:35:07 +0000 (0:00:00.226) 0:00:14.195 ***** 2025-10-08 15:35:07.945921 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:35:07.945932 | orchestrator | 2025-10-08 15:35:07.945943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:07.945961 | orchestrator | Wednesday 08 October 2025 15:35:07 +0000 (0:00:00.218) 0:00:14.414 ***** 2025-10-08 15:35:16.230309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-10-08 15:35:16.230415 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-10-08 15:35:16.230431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-10-08 15:35:16.230442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-10-08 15:35:16.230453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-10-08 15:35:16.230464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-10-08 15:35:16.230475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-10-08 15:35:16.230486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-10-08 15:35:16.230497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-10-08 15:35:16.230508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-10-08 15:35:16.230537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-10-08 15:35:16.230548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-10-08 15:35:16.230559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-10-08 15:35:16.230575 | orchestrator | 2025-10-08 15:35:16.230587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230600 | orchestrator | Wednesday 08 October 2025 15:35:08 +0000 (0:00:00.319) 0:00:14.734 ***** 2025-10-08 15:35:16.230611 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230623 | orchestrator | 2025-10-08 15:35:16.230634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 
15:35:16.230645 | orchestrator | Wednesday 08 October 2025 15:35:08 +0000 (0:00:00.205) 0:00:14.940 ***** 2025-10-08 15:35:16.230656 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230666 | orchestrator | 2025-10-08 15:35:16.230677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230688 | orchestrator | Wednesday 08 October 2025 15:35:08 +0000 (0:00:00.180) 0:00:15.121 ***** 2025-10-08 15:35:16.230699 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230709 | orchestrator | 2025-10-08 15:35:16.230720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230731 | orchestrator | Wednesday 08 October 2025 15:35:08 +0000 (0:00:00.160) 0:00:15.281 ***** 2025-10-08 15:35:16.230742 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230774 | orchestrator | 2025-10-08 15:35:16.230786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230797 | orchestrator | Wednesday 08 October 2025 15:35:08 +0000 (0:00:00.176) 0:00:15.458 ***** 2025-10-08 15:35:16.230808 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230819 | orchestrator | 2025-10-08 15:35:16.230829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230840 | orchestrator | Wednesday 08 October 2025 15:35:09 +0000 (0:00:00.443) 0:00:15.902 ***** 2025-10-08 15:35:16.230851 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230862 | orchestrator | 2025-10-08 15:35:16.230872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230883 | orchestrator | Wednesday 08 October 2025 15:35:09 +0000 (0:00:00.186) 0:00:16.089 ***** 2025-10-08 15:35:16.230894 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230905 | 
orchestrator | 2025-10-08 15:35:16.230916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230927 | orchestrator | Wednesday 08 October 2025 15:35:09 +0000 (0:00:00.174) 0:00:16.263 ***** 2025-10-08 15:35:16.230938 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.230948 | orchestrator | 2025-10-08 15:35:16.230959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.230970 | orchestrator | Wednesday 08 October 2025 15:35:10 +0000 (0:00:00.217) 0:00:16.480 ***** 2025-10-08 15:35:16.230981 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3) 2025-10-08 15:35:16.230994 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3) 2025-10-08 15:35:16.231004 | orchestrator | 2025-10-08 15:35:16.231015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.231026 | orchestrator | Wednesday 08 October 2025 15:35:10 +0000 (0:00:00.434) 0:00:16.915 ***** 2025-10-08 15:35:16.231037 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade) 2025-10-08 15:35:16.231048 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade) 2025-10-08 15:35:16.231059 | orchestrator | 2025-10-08 15:35:16.231070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.231104 | orchestrator | Wednesday 08 October 2025 15:35:10 +0000 (0:00:00.438) 0:00:17.353 ***** 2025-10-08 15:35:16.231116 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956) 2025-10-08 15:35:16.231127 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956) 2025-10-08 15:35:16.231138 | orchestrator | 2025-10-08 15:35:16.231148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.231159 | orchestrator | Wednesday 08 October 2025 15:35:11 +0000 (0:00:00.437) 0:00:17.791 ***** 2025-10-08 15:35:16.231187 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021) 2025-10-08 15:35:16.231199 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021) 2025-10-08 15:35:16.231210 | orchestrator | 2025-10-08 15:35:16.231221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:16.231231 | orchestrator | Wednesday 08 October 2025 15:35:11 +0000 (0:00:00.457) 0:00:18.249 ***** 2025-10-08 15:35:16.231242 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-08 15:35:16.231253 | orchestrator | 2025-10-08 15:35:16.231264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231281 | orchestrator | Wednesday 08 October 2025 15:35:12 +0000 (0:00:00.346) 0:00:18.595 ***** 2025-10-08 15:35:16.231292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-10-08 15:35:16.231311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-10-08 15:35:16.231322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-10-08 15:35:16.231333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-10-08 15:35:16.231343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-10-08 15:35:16.231354 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-10-08 15:35:16.231364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-10-08 15:35:16.231375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-10-08 15:35:16.231385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-10-08 15:35:16.231396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-10-08 15:35:16.231407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-10-08 15:35:16.231417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-10-08 15:35:16.231428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-10-08 15:35:16.231439 | orchestrator | 2025-10-08 15:35:16.231449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231460 | orchestrator | Wednesday 08 October 2025 15:35:12 +0000 (0:00:00.395) 0:00:18.991 ***** 2025-10-08 15:35:16.231471 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231482 | orchestrator | 2025-10-08 15:35:16.231492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231503 | orchestrator | Wednesday 08 October 2025 15:35:13 +0000 (0:00:00.744) 0:00:19.735 ***** 2025-10-08 15:35:16.231514 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231524 | orchestrator | 2025-10-08 15:35:16.231535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231546 | orchestrator | Wednesday 08 October 2025 15:35:13 +0000 (0:00:00.268) 0:00:20.004 ***** 
2025-10-08 15:35:16.231557 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231567 | orchestrator | 2025-10-08 15:35:16.231578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231589 | orchestrator | Wednesday 08 October 2025 15:35:13 +0000 (0:00:00.354) 0:00:20.358 ***** 2025-10-08 15:35:16.231599 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231610 | orchestrator | 2025-10-08 15:35:16.231621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231632 | orchestrator | Wednesday 08 October 2025 15:35:14 +0000 (0:00:00.217) 0:00:20.576 ***** 2025-10-08 15:35:16.231642 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231653 | orchestrator | 2025-10-08 15:35:16.231664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231675 | orchestrator | Wednesday 08 October 2025 15:35:14 +0000 (0:00:00.232) 0:00:20.809 ***** 2025-10-08 15:35:16.231685 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231696 | orchestrator | 2025-10-08 15:35:16.231707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231718 | orchestrator | Wednesday 08 October 2025 15:35:14 +0000 (0:00:00.334) 0:00:21.144 ***** 2025-10-08 15:35:16.231728 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231739 | orchestrator | 2025-10-08 15:35:16.231750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231760 | orchestrator | Wednesday 08 October 2025 15:35:14 +0000 (0:00:00.192) 0:00:21.336 ***** 2025-10-08 15:35:16.231771 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231782 | orchestrator | 2025-10-08 15:35:16.231792 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-10-08 15:35:16.231810 | orchestrator | Wednesday 08 October 2025 15:35:15 +0000 (0:00:00.251) 0:00:21.588 ***** 2025-10-08 15:35:16.231821 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-10-08 15:35:16.231833 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-10-08 15:35:16.231844 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-10-08 15:35:16.231855 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-10-08 15:35:16.231865 | orchestrator | 2025-10-08 15:35:16.231876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:16.231887 | orchestrator | Wednesday 08 October 2025 15:35:15 +0000 (0:00:00.880) 0:00:22.468 ***** 2025-10-08 15:35:16.231898 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:16.231909 | orchestrator | 2025-10-08 15:35:16.231926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:22.593204 | orchestrator | Wednesday 08 October 2025 15:35:16 +0000 (0:00:00.228) 0:00:22.696 ***** 2025-10-08 15:35:22.593301 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593318 | orchestrator | 2025-10-08 15:35:22.593331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:22.593342 | orchestrator | Wednesday 08 October 2025 15:35:16 +0000 (0:00:00.230) 0:00:22.927 ***** 2025-10-08 15:35:22.593353 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593364 | orchestrator | 2025-10-08 15:35:22.593376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:22.593387 | orchestrator | Wednesday 08 October 2025 15:35:16 +0000 (0:00:00.244) 0:00:23.171 ***** 2025-10-08 15:35:22.593398 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593409 | orchestrator | 2025-10-08 15:35:22.593436 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-10-08 15:35:22.593448 | orchestrator | Wednesday 08 October 2025 15:35:17 +0000 (0:00:00.791) 0:00:23.963 ***** 2025-10-08 15:35:22.593459 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-10-08 15:35:22.593470 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-10-08 15:35:22.593481 | orchestrator | 2025-10-08 15:35:22.593492 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-08 15:35:22.593503 | orchestrator | Wednesday 08 October 2025 15:35:17 +0000 (0:00:00.193) 0:00:24.157 ***** 2025-10-08 15:35:22.593514 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593526 | orchestrator | 2025-10-08 15:35:22.593537 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-10-08 15:35:22.593548 | orchestrator | Wednesday 08 October 2025 15:35:17 +0000 (0:00:00.169) 0:00:24.326 ***** 2025-10-08 15:35:22.593559 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593571 | orchestrator | 2025-10-08 15:35:22.593582 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-08 15:35:22.593593 | orchestrator | Wednesday 08 October 2025 15:35:17 +0000 (0:00:00.145) 0:00:24.472 ***** 2025-10-08 15:35:22.593604 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593614 | orchestrator | 2025-10-08 15:35:22.593626 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-08 15:35:22.593636 | orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.148) 0:00:24.620 ***** 2025-10-08 15:35:22.593647 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:35:22.593659 | orchestrator | 2025-10-08 15:35:22.593670 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-08 
15:35:22.593681 | orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.148) 0:00:24.768 ***** 2025-10-08 15:35:22.593693 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ac75f6e-526f-52f0-b624-7532d6099aef'}}) 2025-10-08 15:35:22.593704 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafbc9f1-844e-58d3-a294-acb7fdea1516'}}) 2025-10-08 15:35:22.593716 | orchestrator | 2025-10-08 15:35:22.593728 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-08 15:35:22.593764 | orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.189) 0:00:24.958 ***** 2025-10-08 15:35:22.593778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ac75f6e-526f-52f0-b624-7532d6099aef'}})  2025-10-08 15:35:22.593793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafbc9f1-844e-58d3-a294-acb7fdea1516'}})  2025-10-08 15:35:22.593805 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593818 | orchestrator | 2025-10-08 15:35:22.593831 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-08 15:35:22.593843 | orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.173) 0:00:25.132 ***** 2025-10-08 15:35:22.593856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ac75f6e-526f-52f0-b624-7532d6099aef'}})  2025-10-08 15:35:22.593868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafbc9f1-844e-58d3-a294-acb7fdea1516'}})  2025-10-08 15:35:22.593881 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593893 | orchestrator | 2025-10-08 15:35:22.593906 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-08 15:35:22.593919 | 
orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.180) 0:00:25.313 ***** 2025-10-08 15:35:22.593931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ac75f6e-526f-52f0-b624-7532d6099aef'}})  2025-10-08 15:35:22.593944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafbc9f1-844e-58d3-a294-acb7fdea1516'}})  2025-10-08 15:35:22.593957 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.593970 | orchestrator | 2025-10-08 15:35:22.593983 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-08 15:35:22.593996 | orchestrator | Wednesday 08 October 2025 15:35:18 +0000 (0:00:00.149) 0:00:25.462 ***** 2025-10-08 15:35:22.594008 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:35:22.594140 | orchestrator | 2025-10-08 15:35:22.594156 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-08 15:35:22.594168 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.142) 0:00:25.605 ***** 2025-10-08 15:35:22.594179 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:35:22.594190 | orchestrator | 2025-10-08 15:35:22.594201 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-08 15:35:22.594211 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.139) 0:00:25.744 ***** 2025-10-08 15:35:22.594222 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594233 | orchestrator | 2025-10-08 15:35:22.594264 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-08 15:35:22.594275 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.273) 0:00:26.018 ***** 2025-10-08 15:35:22.594286 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594297 | orchestrator | 2025-10-08 15:35:22.594308 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-10-08 15:35:22.594319 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.128) 0:00:26.147 ***** 2025-10-08 15:35:22.594330 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594340 | orchestrator | 2025-10-08 15:35:22.594351 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-08 15:35:22.594361 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.133) 0:00:26.281 ***** 2025-10-08 15:35:22.594372 | orchestrator | ok: [testbed-node-4] => { 2025-10-08 15:35:22.594383 | orchestrator |  "ceph_osd_devices": { 2025-10-08 15:35:22.594394 | orchestrator |  "sdb": { 2025-10-08 15:35:22.594405 | orchestrator |  "osd_lvm_uuid": "7ac75f6e-526f-52f0-b624-7532d6099aef" 2025-10-08 15:35:22.594416 | orchestrator |  }, 2025-10-08 15:35:22.594427 | orchestrator |  "sdc": { 2025-10-08 15:35:22.594448 | orchestrator |  "osd_lvm_uuid": "bafbc9f1-844e-58d3-a294-acb7fdea1516" 2025-10-08 15:35:22.594459 | orchestrator |  } 2025-10-08 15:35:22.594470 | orchestrator |  } 2025-10-08 15:35:22.594480 | orchestrator | } 2025-10-08 15:35:22.594491 | orchestrator | 2025-10-08 15:35:22.594502 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-08 15:35:22.594513 | orchestrator | Wednesday 08 October 2025 15:35:19 +0000 (0:00:00.145) 0:00:26.426 ***** 2025-10-08 15:35:22.594524 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594535 | orchestrator | 2025-10-08 15:35:22.594552 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-08 15:35:22.594564 | orchestrator | Wednesday 08 October 2025 15:35:20 +0000 (0:00:00.134) 0:00:26.561 ***** 2025-10-08 15:35:22.594574 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594585 | orchestrator | 2025-10-08 15:35:22.594596 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-10-08 15:35:22.594607 | orchestrator | Wednesday 08 October 2025 15:35:20 +0000 (0:00:00.118) 0:00:26.680 ***** 2025-10-08 15:35:22.594617 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:35:22.594628 | orchestrator | 2025-10-08 15:35:22.594639 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-08 15:35:22.594649 | orchestrator | Wednesday 08 October 2025 15:35:20 +0000 (0:00:00.118) 0:00:26.798 ***** 2025-10-08 15:35:22.594660 | orchestrator | changed: [testbed-node-4] => { 2025-10-08 15:35:22.594671 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-08 15:35:22.594682 | orchestrator |  "ceph_osd_devices": { 2025-10-08 15:35:22.594693 | orchestrator |  "sdb": { 2025-10-08 15:35:22.594704 | orchestrator |  "osd_lvm_uuid": "7ac75f6e-526f-52f0-b624-7532d6099aef" 2025-10-08 15:35:22.594719 | orchestrator |  }, 2025-10-08 15:35:22.594730 | orchestrator |  "sdc": { 2025-10-08 15:35:22.594741 | orchestrator |  "osd_lvm_uuid": "bafbc9f1-844e-58d3-a294-acb7fdea1516" 2025-10-08 15:35:22.594761 | orchestrator |  } 2025-10-08 15:35:22.594773 | orchestrator |  }, 2025-10-08 15:35:22.594784 | orchestrator |  "lvm_volumes": [ 2025-10-08 15:35:22.594795 | orchestrator |  { 2025-10-08 15:35:22.594806 | orchestrator |  "data": "osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef", 2025-10-08 15:35:22.594817 | orchestrator |  "data_vg": "ceph-7ac75f6e-526f-52f0-b624-7532d6099aef" 2025-10-08 15:35:22.594828 | orchestrator |  }, 2025-10-08 15:35:22.594839 | orchestrator |  { 2025-10-08 15:35:22.594850 | orchestrator |  "data": "osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516", 2025-10-08 15:35:22.594862 | orchestrator |  "data_vg": "ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516" 2025-10-08 15:35:22.594873 | orchestrator |  } 2025-10-08 15:35:22.594884 | orchestrator |  ] 2025-10-08 15:35:22.594895 | orchestrator |  } 2025-10-08 15:35:22.594906 | 
orchestrator | } 2025-10-08 15:35:22.594917 | orchestrator | 2025-10-08 15:35:22.594928 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-08 15:35:22.594939 | orchestrator | Wednesday 08 October 2025 15:35:20 +0000 (0:00:00.169) 0:00:26.968 ***** 2025-10-08 15:35:22.594950 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-08 15:35:22.594962 | orchestrator | 2025-10-08 15:35:22.594973 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-08 15:35:22.594984 | orchestrator | 2025-10-08 15:35:22.594995 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-08 15:35:22.595006 | orchestrator | Wednesday 08 October 2025 15:35:21 +0000 (0:00:00.981) 0:00:27.950 ***** 2025-10-08 15:35:22.595017 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-08 15:35:22.595028 | orchestrator | 2025-10-08 15:35:22.595039 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-08 15:35:22.595050 | orchestrator | Wednesday 08 October 2025 15:35:21 +0000 (0:00:00.518) 0:00:28.468 ***** 2025-10-08 15:35:22.595069 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:35:22.595098 | orchestrator | 2025-10-08 15:35:22.595109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:22.595129 | orchestrator | Wednesday 08 October 2025 15:35:22 +0000 (0:00:00.233) 0:00:28.702 ***** 2025-10-08 15:35:22.595140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-10-08 15:35:22.595151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-10-08 15:35:22.595162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-10-08 
15:35:22.595173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-10-08 15:35:22.595184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-10-08 15:35:22.595195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-10-08 15:35:22.595214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-10-08 15:35:30.474381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-10-08 15:35:30.474494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-10-08 15:35:30.474509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-10-08 15:35:30.474521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-10-08 15:35:30.474532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-10-08 15:35:30.474543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-10-08 15:35:30.474555 | orchestrator | 2025-10-08 15:35:30.474566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474578 | orchestrator | Wednesday 08 October 2025 15:35:22 +0000 (0:00:00.356) 0:00:29.059 ***** 2025-10-08 15:35:30.474589 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474601 | orchestrator | 2025-10-08 15:35:30.474613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474624 | orchestrator | Wednesday 08 October 2025 15:35:22 +0000 (0:00:00.223) 0:00:29.282 ***** 2025-10-08 15:35:30.474635 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474646 | orchestrator | 
2025-10-08 15:35:30.474657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474668 | orchestrator | Wednesday 08 October 2025 15:35:23 +0000 (0:00:00.248) 0:00:29.530 ***** 2025-10-08 15:35:30.474679 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474690 | orchestrator | 2025-10-08 15:35:30.474701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474712 | orchestrator | Wednesday 08 October 2025 15:35:23 +0000 (0:00:00.242) 0:00:29.773 ***** 2025-10-08 15:35:30.474723 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474733 | orchestrator | 2025-10-08 15:35:30.474744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474756 | orchestrator | Wednesday 08 October 2025 15:35:23 +0000 (0:00:00.264) 0:00:30.037 ***** 2025-10-08 15:35:30.474766 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474777 | orchestrator | 2025-10-08 15:35:30.474788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474799 | orchestrator | Wednesday 08 October 2025 15:35:23 +0000 (0:00:00.213) 0:00:30.250 ***** 2025-10-08 15:35:30.474810 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474821 | orchestrator | 2025-10-08 15:35:30.474832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474843 | orchestrator | Wednesday 08 October 2025 15:35:23 +0000 (0:00:00.207) 0:00:30.457 ***** 2025-10-08 15:35:30.474854 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474891 | orchestrator | 2025-10-08 15:35:30.474903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474914 | orchestrator | Wednesday 08 October 2025 15:35:24 +0000 
(0:00:00.237) 0:00:30.695 ***** 2025-10-08 15:35:30.474925 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.474936 | orchestrator | 2025-10-08 15:35:30.474962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.474974 | orchestrator | Wednesday 08 October 2025 15:35:24 +0000 (0:00:00.214) 0:00:30.909 ***** 2025-10-08 15:35:30.474985 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52) 2025-10-08 15:35:30.474998 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52) 2025-10-08 15:35:30.475009 | orchestrator | 2025-10-08 15:35:30.475020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.475031 | orchestrator | Wednesday 08 October 2025 15:35:25 +0000 (0:00:00.918) 0:00:31.827 ***** 2025-10-08 15:35:30.475042 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd) 2025-10-08 15:35:30.475053 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd) 2025-10-08 15:35:30.475064 | orchestrator | 2025-10-08 15:35:30.475099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.475111 | orchestrator | Wednesday 08 October 2025 15:35:25 +0000 (0:00:00.459) 0:00:32.287 ***** 2025-10-08 15:35:30.475122 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1) 2025-10-08 15:35:30.475133 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1) 2025-10-08 15:35:30.475144 | orchestrator | 2025-10-08 15:35:30.475155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.475166 | orchestrator | 
Wednesday 08 October 2025 15:35:26 +0000 (0:00:00.454) 0:00:32.741 ***** 2025-10-08 15:35:30.475177 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f) 2025-10-08 15:35:30.475188 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f) 2025-10-08 15:35:30.475199 | orchestrator | 2025-10-08 15:35:30.475210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:35:30.475220 | orchestrator | Wednesday 08 October 2025 15:35:26 +0000 (0:00:00.493) 0:00:33.234 ***** 2025-10-08 15:35:30.475231 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-08 15:35:30.475242 | orchestrator | 2025-10-08 15:35:30.475253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:30.475264 | orchestrator | Wednesday 08 October 2025 15:35:27 +0000 (0:00:00.431) 0:00:33.665 ***** 2025-10-08 15:35:30.475292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-10-08 15:35:30.475304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-10-08 15:35:30.475314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-10-08 15:35:30.475325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-10-08 15:35:30.475336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-10-08 15:35:30.475346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-10-08 15:35:30.475357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-10-08 15:35:30.475368 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-10-08 15:35:30.475379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-10-08 15:35:30.475398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-10-08 15:35:30.475409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-10-08 15:35:30.475420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-10-08 15:35:30.475430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-10-08 15:35:30.475441 | orchestrator | 2025-10-08 15:35:30.475452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:30.475463 | orchestrator | Wednesday 08 October 2025 15:35:27 +0000 (0:00:00.391) 0:00:34.057 ***** 2025-10-08 15:35:30.475474 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.475484 | orchestrator | 2025-10-08 15:35:30.475495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:30.475506 | orchestrator | Wednesday 08 October 2025 15:35:27 +0000 (0:00:00.203) 0:00:34.261 ***** 2025-10-08 15:35:30.475517 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.475527 | orchestrator | 2025-10-08 15:35:30.475538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:30.475549 | orchestrator | Wednesday 08 October 2025 15:35:27 +0000 (0:00:00.189) 0:00:34.450 ***** 2025-10-08 15:35:30.475560 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:35:30.475571 | orchestrator | 2025-10-08 15:35:30.475581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:35:30.475592 | 
orchestrator | Wednesday 08 October 2025 15:35:28 +0000 (0:00:00.154) 0:00:34.604 *****
2025-10-08 15:35:30.475603 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475613 | orchestrator |
2025-10-08 15:35:30.475624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475635 | orchestrator | Wednesday 08 October 2025 15:35:28 +0000 (0:00:00.135) 0:00:34.739 *****
2025-10-08 15:35:30.475645 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475656 | orchestrator |
2025-10-08 15:35:30.475667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475677 | orchestrator | Wednesday 08 October 2025 15:35:28 +0000 (0:00:00.143) 0:00:34.882 *****
2025-10-08 15:35:30.475688 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475699 | orchestrator |
2025-10-08 15:35:30.475710 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475720 | orchestrator | Wednesday 08 October 2025 15:35:28 +0000 (0:00:00.423) 0:00:35.306 *****
2025-10-08 15:35:30.475731 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475742 | orchestrator |
2025-10-08 15:35:30.475752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475763 | orchestrator | Wednesday 08 October 2025 15:35:28 +0000 (0:00:00.171) 0:00:35.477 *****
2025-10-08 15:35:30.475774 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475784 | orchestrator |
2025-10-08 15:35:30.475795 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475806 | orchestrator | Wednesday 08 October 2025 15:35:29 +0000 (0:00:00.202) 0:00:35.680 *****
2025-10-08 15:35:30.475817 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-10-08 15:35:30.475828 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-10-08 15:35:30.475839 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-10-08 15:35:30.475850 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-10-08 15:35:30.475861 | orchestrator |
2025-10-08 15:35:30.475872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475882 | orchestrator | Wednesday 08 October 2025 15:35:29 +0000 (0:00:00.606) 0:00:36.287 *****
2025-10-08 15:35:30.475893 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475904 | orchestrator |
2025-10-08 15:35:30.475915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475933 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.203) 0:00:36.490 *****
2025-10-08 15:35:30.475943 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475954 | orchestrator |
2025-10-08 15:35:30.475965 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.475976 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.160) 0:00:36.651 *****
2025-10-08 15:35:30.475987 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.475997 | orchestrator |
2025-10-08 15:35:30.476008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:35:30.476019 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.145) 0:00:36.797 *****
2025-10-08 15:35:30.476035 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:30.476046 | orchestrator |
2025-10-08 15:35:30.476057 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-10-08 15:35:30.476101 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.148) 0:00:36.945 *****
2025-10-08 15:35:34.240593 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-10-08 15:35:34.240694 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-10-08 15:35:34.240708 | orchestrator |
2025-10-08 15:35:34.240720 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-10-08 15:35:34.240731 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.125) 0:00:37.071 *****
2025-10-08 15:35:34.240742 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.240753 | orchestrator |
2025-10-08 15:35:34.240764 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-10-08 15:35:34.240775 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.113) 0:00:37.184 *****
2025-10-08 15:35:34.240786 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.240797 | orchestrator |
2025-10-08 15:35:34.240808 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-10-08 15:35:34.240818 | orchestrator | Wednesday 08 October 2025 15:35:30 +0000 (0:00:00.155) 0:00:37.339 *****
2025-10-08 15:35:34.240829 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.240840 | orchestrator |
2025-10-08 15:35:34.240850 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-10-08 15:35:34.240861 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.288) 0:00:37.628 *****
2025-10-08 15:35:34.240872 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:35:34.240883 | orchestrator |
2025-10-08 15:35:34.240894 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-10-08 15:35:34.240905 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.120) 0:00:37.748 *****
2025-10-08 15:35:34.240918 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93919d76-3b82-5996-a675-e75a55626347'}})
2025-10-08 15:35:34.240929 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cead9db5-2c40-515a-bcee-782342d5bd60'}})
2025-10-08 15:35:34.240940 | orchestrator |
2025-10-08 15:35:34.240951 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-10-08 15:35:34.240962 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.189) 0:00:37.938 *****
2025-10-08 15:35:34.240974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93919d76-3b82-5996-a675-e75a55626347'}})
2025-10-08 15:35:34.240986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cead9db5-2c40-515a-bcee-782342d5bd60'}})
2025-10-08 15:35:34.240997 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241008 | orchestrator |
2025-10-08 15:35:34.241036 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-10-08 15:35:34.241047 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.171) 0:00:38.109 *****
2025-10-08 15:35:34.241059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93919d76-3b82-5996-a675-e75a55626347'}})
2025-10-08 15:35:34.241133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cead9db5-2c40-515a-bcee-782342d5bd60'}})
2025-10-08 15:35:34.241148 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241160 | orchestrator |
2025-10-08 15:35:34.241172 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-10-08 15:35:34.241183 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.133) 0:00:38.282 *****
2025-10-08 15:35:34.241196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93919d76-3b82-5996-a675-e75a55626347'}})
2025-10-08 15:35:34.241212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cead9db5-2c40-515a-bcee-782342d5bd60'}})
2025-10-08 15:35:34.241231 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241249 | orchestrator |
2025-10-08 15:35:34.241267 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-10-08 15:35:34.241296 | orchestrator | Wednesday 08 October 2025 15:35:31 +0000 (0:00:00.133) 0:00:38.415 *****
2025-10-08 15:35:34.241315 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:35:34.241333 | orchestrator |
2025-10-08 15:35:34.241353 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-10-08 15:35:34.241367 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.124) 0:00:38.540 *****
2025-10-08 15:35:34.241379 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:35:34.241391 | orchestrator |
2025-10-08 15:35:34.241403 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-10-08 15:35:34.241415 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.125) 0:00:38.665 *****
2025-10-08 15:35:34.241428 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241440 | orchestrator |
2025-10-08 15:35:34.241452 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-10-08 15:35:34.241464 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.121) 0:00:38.787 *****
2025-10-08 15:35:34.241474 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241485 | orchestrator |
2025-10-08 15:35:34.241496 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-10-08 15:35:34.241507 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.106) 0:00:38.894 *****
2025-10-08 15:35:34.241517 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241528 | orchestrator |
2025-10-08 15:35:34.241539 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-10-08 15:35:34.241549 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.126) 0:00:39.020 *****
2025-10-08 15:35:34.241560 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:35:34.241571 | orchestrator |     "ceph_osd_devices": {
2025-10-08 15:35:34.241582 | orchestrator |         "sdb": {
2025-10-08 15:35:34.241593 | orchestrator |             "osd_lvm_uuid": "93919d76-3b82-5996-a675-e75a55626347"
2025-10-08 15:35:34.241622 | orchestrator |         },
2025-10-08 15:35:34.241634 | orchestrator |         "sdc": {
2025-10-08 15:35:34.241645 | orchestrator |             "osd_lvm_uuid": "cead9db5-2c40-515a-bcee-782342d5bd60"
2025-10-08 15:35:34.241656 | orchestrator |         }
2025-10-08 15:35:34.241667 | orchestrator |     }
2025-10-08 15:35:34.241678 | orchestrator | }
2025-10-08 15:35:34.241689 | orchestrator |
2025-10-08 15:35:34.241700 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-10-08 15:35:34.241711 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.138) 0:00:39.158 *****
2025-10-08 15:35:34.241722 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241733 | orchestrator |
2025-10-08 15:35:34.241744 | orchestrator | TASK [Print DB devices] ********************************************************
2025-10-08 15:35:34.241755 | orchestrator | Wednesday 08 October 2025 15:35:32 +0000 (0:00:00.130) 0:00:39.289 *****
2025-10-08 15:35:34.241766 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:35:34.241777 | orchestrator |
2025-10-08 15:35:34.241787 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-10-08 15:35:34.241869 | orchestrator | Wednesday 08 October 2025 15:35:33 +0000 (0:00:00.281) 0:00:39.571 *****
2025-10-08 15:35:34.241881 | orchestrator | skipping: [testbed-node-5]
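An aside on the `osd_lvm_uuid` values printed above: they are stable across runs and carry the version-5 nibble (e.g. `93919d76-3b82-5996-…`), which suggests name-based UUIDs. The sketch below shows how such deterministic identifiers can be derived per host/device pair; the namespace and name inputs used by OSISM are assumptions here, not taken from the log.

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # Hypothetical derivation: a name-based (version 5) UUID is the same
    # on every run for the same host/device pair, which is what makes
    # re-running the play idempotent. The namespace choice is an assumption.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u1 = osd_lvm_uuid("testbed-node-5", "sdb")
u2 = osd_lvm_uuid("testbed-node-5", "sdb")
assert u1 == u2                       # deterministic: same inputs, same UUID
assert uuid.UUID(u1).version == 5     # name-based, like the logged values
```

Whatever the real inputs are, the key property is determinism: the UUID (and hence every VG/LV name built from it) can be regenerated without storing state on the node.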
2025-10-08 15:35:34.241892 | orchestrator |
2025-10-08 15:35:34.241902 | orchestrator | TASK [Print configuration data] ************************************************
2025-10-08 15:35:34.241913 | orchestrator | Wednesday 08 October 2025 15:35:33 +0000 (0:00:00.099) 0:00:39.670 *****
2025-10-08 15:35:34.241924 | orchestrator | changed: [testbed-node-5] => {
2025-10-08 15:35:34.241935 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-10-08 15:35:34.241946 | orchestrator |         "ceph_osd_devices": {
2025-10-08 15:35:34.241957 | orchestrator |             "sdb": {
2025-10-08 15:35:34.241967 | orchestrator |                 "osd_lvm_uuid": "93919d76-3b82-5996-a675-e75a55626347"
2025-10-08 15:35:34.241978 | orchestrator |             },
2025-10-08 15:35:34.241989 | orchestrator |             "sdc": {
2025-10-08 15:35:34.242000 | orchestrator |                 "osd_lvm_uuid": "cead9db5-2c40-515a-bcee-782342d5bd60"
2025-10-08 15:35:34.242010 | orchestrator |             }
2025-10-08 15:35:34.242111 | orchestrator |         },
2025-10-08 15:35:34.242124 | orchestrator |         "lvm_volumes": [
2025-10-08 15:35:34.242135 | orchestrator |             {
2025-10-08 15:35:34.242146 | orchestrator |                 "data": "osd-block-93919d76-3b82-5996-a675-e75a55626347",
2025-10-08 15:35:34.242157 | orchestrator |                 "data_vg": "ceph-93919d76-3b82-5996-a675-e75a55626347"
2025-10-08 15:35:34.242168 | orchestrator |             },
2025-10-08 15:35:34.242179 | orchestrator |             {
2025-10-08 15:35:34.242190 | orchestrator |                 "data": "osd-block-cead9db5-2c40-515a-bcee-782342d5bd60",
2025-10-08 15:35:34.242200 | orchestrator |                 "data_vg": "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"
2025-10-08 15:35:34.242211 | orchestrator |             }
2025-10-08 15:35:34.242222 | orchestrator |         ]
2025-10-08 15:35:34.242233 | orchestrator |     }
2025-10-08 15:35:34.242248 | orchestrator | }
2025-10-08 15:35:34.242259 | orchestrator |
2025-10-08 15:35:34.242270 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-10-08 15:35:34.242281 | orchestrator | Wednesday 08 October 2025 15:35:33 +0000 (0:00:00.172) 0:00:39.842 *****
2025-10-08 15:35:34.242292 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-10-08 15:35:34.242303 | orchestrator |
2025-10-08 15:35:34.242313 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:35:34.242344 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-08 15:35:34.242364 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-08 15:35:34.242383 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-08 15:35:34.242400 | orchestrator |
2025-10-08 15:35:34.242418 | orchestrator |
2025-10-08 15:35:34.242435 | orchestrator |
2025-10-08 15:35:34.242452 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:35:34.242470 | orchestrator | Wednesday 08 October 2025 15:35:34 +0000 (0:00:00.853) 0:00:40.696 *****
2025-10-08 15:35:34.242487 | orchestrator | ===============================================================================
2025-10-08 15:35:34.242504 | orchestrator | Write configuration file ------------------------------------------------ 3.48s
2025-10-08 15:35:34.242522 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2025-10-08 15:35:34.242541 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s
2025-10-08 15:35:34.242559 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.98s
2025-10-08 15:35:34.242577 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s
2025-10-08 15:35:34.242599 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2025-10-08 15:35:34.242610 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2025-10-08 15:35:34.242620 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2025-10-08 15:35:34.242631 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2025-10-08 15:35:34.242642 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-10-08 15:35:34.242653 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-10-08 15:35:34.242663 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2025-10-08 15:35:34.242674 | orchestrator | Print configuration data ------------------------------------------------ 0.65s
2025-10-08 15:35:34.242685 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s
2025-10-08 15:35:34.242708 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-10-08 15:35:34.478976 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-10-08 15:35:34.479067 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.57s
2025-10-08 15:35:34.479133 | orchestrator | Print DB devices -------------------------------------------------------- 0.54s
2025-10-08 15:35:34.479145 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.54s
2025-10-08 15:35:34.479156 | orchestrator | Set DB devices config data ---------------------------------------------- 0.52s
2025-10-08 15:35:57.566288 | orchestrator | 2025-10-08 15:35:57 | INFO  | Task 68695271-27e1-4876-85d5-6357fdbaa365 (sync inventory) is running in background. Output coming soon.
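The data flow of the play above is visible in the "Print configuration data" output: each entry in `ceph_osd_devices` yields one `lvm_volumes` element whose LV name is `osd-block-<uuid>` and whose VG name is `ceph-<uuid>`. A minimal sketch of that mapping (the helper name is mine, not OSISM's):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Map each OSD device to its block-only lvm_volumes entry,
    embedding the device's osd_lvm_uuid in the LV and VG names."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The values logged for testbed-node-5:
devices = {
    "sdb": {"osd_lvm_uuid": "93919d76-3b82-5996-a675-e75a55626347"},
    "sdc": {"osd_lvm_uuid": "cead9db5-2c40-515a-bcee-782342d5bd60"},
}
volumes = build_lvm_volumes(devices)
assert volumes[0]["data"] == "osd-block-93919d76-3b82-5996-a675-e75a55626347"
assert volumes[1]["data_vg"] == "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"
```

With DB and/or WAL devices configured, the corresponding tasks (skipped in this run) would extend each entry with `db`/`db_vg` and `wal`/`wal_vg` keys in the same pattern.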
2025-10-08 15:36:21.873933 | orchestrator | 2025-10-08 15:35:58 | INFO  | Starting group_vars file reorganization
2025-10-08 15:36:21.874113 | orchestrator | 2025-10-08 15:35:58 | INFO  | Moved 0 file(s) to their respective directories
2025-10-08 15:36:21.874133 | orchestrator | 2025-10-08 15:35:58 | INFO  | Group_vars file reorganization completed
2025-10-08 15:36:21.874145 | orchestrator | 2025-10-08 15:36:01 | INFO  | Starting variable preparation from inventory
2025-10-08 15:36:21.874157 | orchestrator | 2025-10-08 15:36:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-10-08 15:36:21.874168 | orchestrator | 2025-10-08 15:36:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-10-08 15:36:21.874179 | orchestrator | 2025-10-08 15:36:04 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-10-08 15:36:21.874190 | orchestrator | 2025-10-08 15:36:04 | INFO  | 3 file(s) written, 6 host(s) processed
2025-10-08 15:36:21.874201 | orchestrator | 2025-10-08 15:36:04 | INFO  | Variable preparation completed
2025-10-08 15:36:21.874212 | orchestrator | 2025-10-08 15:36:06 | INFO  | Starting inventory overwrite handling
2025-10-08 15:36:21.874224 | orchestrator | 2025-10-08 15:36:06 | INFO  | Handling group overwrites in 99-overwrite
2025-10-08 15:36:21.874236 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group frr:children from 60-generic
2025-10-08 15:36:21.874247 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group storage:children from 50-kolla
2025-10-08 15:36:21.874258 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group netbird:children from 50-infrastructure
2025-10-08 15:36:21.874269 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group ceph-rgw from 50-ceph
2025-10-08 15:36:21.874281 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group ceph-mds from 50-ceph
2025-10-08 15:36:21.874292 | orchestrator | 2025-10-08 15:36:06 | INFO  | Handling group overwrites in 20-roles
2025-10-08 15:36:21.874302 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removing group k3s_node from 50-infrastructure
2025-10-08 15:36:21.874341 | orchestrator | 2025-10-08 15:36:06 | INFO  | Removed 6 group(s) in total
2025-10-08 15:36:21.874353 | orchestrator | 2025-10-08 15:36:06 | INFO  | Inventory overwrite handling completed
2025-10-08 15:36:21.874364 | orchestrator | 2025-10-08 15:36:07 | INFO  | Starting merge of inventory files
2025-10-08 15:36:21.874375 | orchestrator | 2025-10-08 15:36:07 | INFO  | Inventory files merged successfully
2025-10-08 15:36:21.874385 | orchestrator | 2025-10-08 15:36:11 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-10-08 15:36:21.874396 | orchestrator | 2025-10-08 15:36:20 | INFO  | Successfully wrote ClusterShell configuration
2025-10-08 15:36:21.874408 | orchestrator | [master 700c8af] 2025-10-08-15-36
2025-10-08 15:36:21.874420 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-10-08 15:36:24.238001 | orchestrator | 2025-10-08 15:36:24 | INFO  | Task 5435a957-0eba-49ca-a9f7-4ed4c9afe4cb (ceph-create-lvm-devices) was prepared for execution.
2025-10-08 15:36:24.238185 | orchestrator | 2025-10-08 15:36:24 | INFO  | It takes a moment until task 5435a957-0eba-49ca-a9f7-4ed4c9afe4cb (ceph-create-lvm-devices) has been started and output is visible here.
2025-10-08 15:36:36.716768 | orchestrator |
2025-10-08 15:36:36.716880 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-10-08 15:36:36.716897 | orchestrator |
2025-10-08 15:36:36.716909 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-10-08 15:36:36.716922 | orchestrator | Wednesday 08 October 2025 15:36:28 +0000 (0:00:00.307) 0:00:00.307 *****
2025-10-08 15:36:36.716933 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 15:36:36.716945 | orchestrator |
2025-10-08 15:36:36.716956 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-10-08 15:36:36.716967 | orchestrator | Wednesday 08 October 2025 15:36:28 +0000 (0:00:00.239) 0:00:00.547 *****
2025-10-08 15:36:36.716978 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:36:36.716990 | orchestrator |
2025-10-08 15:36:36.717002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717013 | orchestrator | Wednesday 08 October 2025 15:36:29 +0000 (0:00:00.229) 0:00:00.777 *****
2025-10-08 15:36:36.717024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-10-08 15:36:36.717091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-10-08 15:36:36.717103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-10-08 15:36:36.717114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-10-08 15:36:36.717125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-10-08 15:36:36.717135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-10-08 15:36:36.717146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-10-08 15:36:36.717157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-10-08 15:36:36.717168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-10-08 15:36:36.717178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-10-08 15:36:36.717189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-10-08 15:36:36.717200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-10-08 15:36:36.717211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-10-08 15:36:36.717221 | orchestrator |
2025-10-08 15:36:36.717232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717270 | orchestrator | Wednesday 08 October 2025 15:36:29 +0000 (0:00:00.567) 0:00:01.345 *****
2025-10-08 15:36:36.717282 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717293 | orchestrator |
2025-10-08 15:36:36.717306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717334 | orchestrator | Wednesday 08 October 2025 15:36:29 +0000 (0:00:00.229) 0:00:01.574 *****
2025-10-08 15:36:36.717347 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717359 | orchestrator |
2025-10-08 15:36:36.717371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717383 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.197) 0:00:01.772 *****
2025-10-08 15:36:36.717403 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717416 | orchestrator |
2025-10-08 15:36:36.717428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717440 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.219) 0:00:01.991 *****
2025-10-08 15:36:36.717452 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717464 | orchestrator |
2025-10-08 15:36:36.717476 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717488 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.187) 0:00:02.179 *****
2025-10-08 15:36:36.717500 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717511 | orchestrator |
2025-10-08 15:36:36.717523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717535 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.203) 0:00:02.382 *****
2025-10-08 15:36:36.717547 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717559 | orchestrator |
2025-10-08 15:36:36.717571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717583 | orchestrator | Wednesday 08 October 2025 15:36:30 +0000 (0:00:00.188) 0:00:02.570 *****
2025-10-08 15:36:36.717596 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717608 | orchestrator |
2025-10-08 15:36:36.717620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717632 | orchestrator | Wednesday 08 October 2025 15:36:31 +0000 (0:00:00.223) 0:00:02.793 *****
2025-10-08 15:36:36.717643 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.717655 | orchestrator |
2025-10-08 15:36:36.717666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717677 | orchestrator | Wednesday 08 October 2025 15:36:31 +0000 (0:00:00.223) 0:00:03.016 *****
2025-10-08 15:36:36.717688 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8)
2025-10-08 15:36:36.717700 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8)
2025-10-08 15:36:36.717711 | orchestrator |
2025-10-08 15:36:36.717722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717733 | orchestrator | Wednesday 08 October 2025 15:36:31 +0000 (0:00:00.417) 0:00:03.434 *****
2025-10-08 15:36:36.717762 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182)
2025-10-08 15:36:36.717775 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182)
2025-10-08 15:36:36.717786 | orchestrator |
2025-10-08 15:36:36.717797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717808 | orchestrator | Wednesday 08 October 2025 15:36:32 +0000 (0:00:00.628) 0:00:04.063 *****
2025-10-08 15:36:36.717819 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff)
2025-10-08 15:36:36.717830 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff)
2025-10-08 15:36:36.717840 | orchestrator |
2025-10-08 15:36:36.717851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717872 | orchestrator | Wednesday 08 October 2025 15:36:33 +0000 (0:00:00.699) 0:00:04.762 *****
2025-10-08 15:36:36.717882 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298)
2025-10-08 15:36:36.717893 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298)
2025-10-08 15:36:36.717904 | orchestrator |
2025-10-08 15:36:36.717915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-08 15:36:36.717926 | orchestrator | Wednesday 08 October 2025 15:36:34 +0000 (0:00:00.898) 0:00:05.661 *****
2025-10-08 15:36:36.717937 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-10-08 15:36:36.717948 | orchestrator |
2025-10-08 15:36:36.717959 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.717969 | orchestrator | Wednesday 08 October 2025 15:36:34 +0000 (0:00:00.351) 0:00:06.012 *****
2025-10-08 15:36:36.717980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-10-08 15:36:36.717991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-10-08 15:36:36.718001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-10-08 15:36:36.718012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-10-08 15:36:36.718161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-10-08 15:36:36.718174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-10-08 15:36:36.718184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-10-08 15:36:36.718195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-10-08 15:36:36.718206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-10-08 15:36:36.718216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-10-08 15:36:36.718227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-10-08 15:36:36.718238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-10-08 15:36:36.718248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-10-08 15:36:36.718259 | orchestrator |
2025-10-08 15:36:36.718270 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718281 | orchestrator | Wednesday 08 October 2025 15:36:34 +0000 (0:00:00.503) 0:00:06.516 *****
2025-10-08 15:36:36.718292 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718303 | orchestrator |
2025-10-08 15:36:36.718314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718324 | orchestrator | Wednesday 08 October 2025 15:36:35 +0000 (0:00:00.215) 0:00:06.731 *****
2025-10-08 15:36:36.718335 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718346 | orchestrator |
2025-10-08 15:36:36.718356 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718367 | orchestrator | Wednesday 08 October 2025 15:36:35 +0000 (0:00:00.225) 0:00:06.957 *****
2025-10-08 15:36:36.718378 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718388 | orchestrator |
2025-10-08 15:36:36.718399 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718410 | orchestrator | Wednesday 08 October 2025 15:36:35 +0000 (0:00:00.209) 0:00:07.167 *****
2025-10-08 15:36:36.718420 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718431 | orchestrator |
2025-10-08 15:36:36.718442 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718464 | orchestrator | Wednesday 08 October 2025 15:36:35 +0000 (0:00:00.222) 0:00:07.389 *****
2025-10-08 15:36:36.718475 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718486 | orchestrator |
2025-10-08 15:36:36.718497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718507 | orchestrator | Wednesday 08 October 2025 15:36:35 +0000 (0:00:00.212) 0:00:07.602 *****
2025-10-08 15:36:36.718518 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718529 | orchestrator |
2025-10-08 15:36:36.718540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718551 | orchestrator | Wednesday 08 October 2025 15:36:36 +0000 (0:00:00.277) 0:00:07.879 *****
2025-10-08 15:36:36.718562 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:36.718573 | orchestrator |
2025-10-08 15:36:36.718584 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:36.718595 | orchestrator | Wednesday 08 October 2025 15:36:36 +0000 (0:00:00.245) 0:00:08.125 *****
2025-10-08 15:36:36.718614 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257634 | orchestrator |
2025-10-08 15:36:45.257732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:45.257747 | orchestrator | Wednesday 08 October 2025 15:36:36 +0000 (0:00:00.216) 0:00:08.341 *****
2025-10-08 15:36:45.257757 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-10-08 15:36:45.257768 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-10-08 15:36:45.257778 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-10-08 15:36:45.257787 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-10-08 15:36:45.257796 | orchestrator |
2025-10-08 15:36:45.257806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:45.257815 | orchestrator | Wednesday 08 October 2025 15:36:37 +0000 (0:00:01.145) 0:00:09.487 *****
2025-10-08 15:36:45.257824 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257833 | orchestrator |
2025-10-08 15:36:45.257842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:45.257852 | orchestrator | Wednesday 08 October 2025 15:36:38 +0000 (0:00:00.215) 0:00:09.703 *****
2025-10-08 15:36:45.257861 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257869 | orchestrator |
2025-10-08 15:36:45.257878 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:45.257887 | orchestrator | Wednesday 08 October 2025 15:36:38 +0000 (0:00:00.209) 0:00:09.912 *****
2025-10-08 15:36:45.257896 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257905 | orchestrator |
2025-10-08 15:36:45.257914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-08 15:36:45.257923 | orchestrator | Wednesday 08 October 2025 15:36:38 +0000 (0:00:00.216) 0:00:10.128 *****
2025-10-08 15:36:45.257932 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257941 | orchestrator |
2025-10-08 15:36:45.257950 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-10-08 15:36:45.257959 | orchestrator | Wednesday 08 October 2025 15:36:38 +0000 (0:00:00.228) 0:00:10.357 *****
2025-10-08 15:36:45.257968 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.257996 | orchestrator |
2025-10-08 15:36:45.258006 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-10-08 15:36:45.258095 | orchestrator | Wednesday 08 October 2025 15:36:38 +0000 (0:00:00.154) 0:00:10.511 *****
2025-10-08 15:36:45.258111 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}})
2025-10-08 15:36:45.258120 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}})
2025-10-08 15:36:45.258129 | orchestrator |
2025-10-08 15:36:45.258138 | orchestrator | TASK [Create block VGs] ********************************************************
2025-10-08 15:36:45.258147 | orchestrator | Wednesday 08 October 2025 15:36:39 +0000 (0:00:00.202) 0:00:10.714 *****
2025-10-08 15:36:45.258195 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})
2025-10-08 15:36:45.258207 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})
2025-10-08 15:36:45.258218 | orchestrator |
2025-10-08 15:36:45.258243 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-10-08 15:36:45.258259 | orchestrator | Wednesday 08 October 2025 15:36:41 +0000 (0:00:01.999) 0:00:12.713 *****
2025-10-08 15:36:45.258269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})
2025-10-08 15:36:45.258281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})
2025-10-08 15:36:45.258291 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:36:45.258301 | orchestrator |
2025-10-08 15:36:45.258311 | orchestrator | TASK [Create block LVs] ********************************************************
2025-10-08 15:36:45.258320 | orchestrator | Wednesday 08 October 2025 15:36:41 +0000 (0:00:00.218) 0:00:12.932 *****
2025-10-08 15:36:45.258331 | orchestrator | changed: [testbed-node-3] => (item={'data':
'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}) 2025-10-08 15:36:45.258341 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}) 2025-10-08 15:36:45.258350 | orchestrator | 2025-10-08 15:36:45.258360 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-08 15:36:45.258370 | orchestrator | Wednesday 08 October 2025 15:36:42 +0000 (0:00:01.483) 0:00:14.416 ***** 2025-10-08 15:36:45.258380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258400 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258410 | orchestrator | 2025-10-08 15:36:45.258420 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-08 15:36:45.258430 | orchestrator | Wednesday 08 October 2025 15:36:42 +0000 (0:00:00.170) 0:00:14.587 ***** 2025-10-08 15:36:45.258440 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258449 | orchestrator | 2025-10-08 15:36:45.258459 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-08 15:36:45.258486 | orchestrator | Wednesday 08 October 2025 15:36:43 +0000 (0:00:00.143) 0:00:14.730 ***** 2025-10-08 15:36:45.258497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258517 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258527 | orchestrator | 2025-10-08 15:36:45.258537 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-08 15:36:45.258546 | orchestrator | Wednesday 08 October 2025 15:36:43 +0000 (0:00:00.367) 0:00:15.097 ***** 2025-10-08 15:36:45.258555 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258564 | orchestrator | 2025-10-08 15:36:45.258572 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-08 15:36:45.258581 | orchestrator | Wednesday 08 October 2025 15:36:43 +0000 (0:00:00.164) 0:00:15.262 ***** 2025-10-08 15:36:45.258590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258616 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258625 | orchestrator | 2025-10-08 15:36:45.258634 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-08 15:36:45.258643 | orchestrator | Wednesday 08 October 2025 15:36:43 +0000 (0:00:00.172) 0:00:15.435 ***** 2025-10-08 15:36:45.258651 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258660 | orchestrator | 2025-10-08 15:36:45.258669 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-10-08 15:36:45.258678 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.237) 0:00:15.672 ***** 2025-10-08 15:36:45.258687 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258705 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258714 | orchestrator | 2025-10-08 15:36:45.258723 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-08 15:36:45.258732 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.172) 0:00:15.845 ***** 2025-10-08 15:36:45.258741 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:45.258750 | orchestrator | 2025-10-08 15:36:45.258759 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-08 15:36:45.258768 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.149) 0:00:15.995 ***** 2025-10-08 15:36:45.258781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258790 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258799 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258808 | orchestrator | 2025-10-08 15:36:45.258817 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-08 15:36:45.258826 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.165) 0:00:16.160 ***** 2025-10-08 15:36:45.258835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  
2025-10-08 15:36:45.258844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258853 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258862 | orchestrator | 2025-10-08 15:36:45.258871 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-08 15:36:45.258880 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.226) 0:00:16.387 ***** 2025-10-08 15:36:45.258889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:45.258898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:45.258907 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258916 | orchestrator | 2025-10-08 15:36:45.258925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-08 15:36:45.258934 | orchestrator | Wednesday 08 October 2025 15:36:44 +0000 (0:00:00.190) 0:00:16.578 ***** 2025-10-08 15:36:45.258942 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258957 | orchestrator | 2025-10-08 15:36:45.258966 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-08 15:36:45.258975 | orchestrator | Wednesday 08 October 2025 15:36:45 +0000 (0:00:00.160) 0:00:16.738 ***** 2025-10-08 15:36:45.258984 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:45.258992 | orchestrator | 2025-10-08 15:36:45.259007 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-08 15:36:51.689840 | orchestrator | Wednesday 08 October 2025 15:36:45 +0000 
(0:00:00.142) 0:00:16.880 ***** 2025-10-08 15:36:51.689951 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.689968 | orchestrator | 2025-10-08 15:36:51.689980 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-08 15:36:51.689991 | orchestrator | Wednesday 08 October 2025 15:36:45 +0000 (0:00:00.174) 0:00:17.055 ***** 2025-10-08 15:36:51.690002 | orchestrator | ok: [testbed-node-3] => { 2025-10-08 15:36:51.690014 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-08 15:36:51.690131 | orchestrator | } 2025-10-08 15:36:51.690143 | orchestrator | 2025-10-08 15:36:51.690155 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-08 15:36:51.690166 | orchestrator | Wednesday 08 October 2025 15:36:45 +0000 (0:00:00.335) 0:00:17.391 ***** 2025-10-08 15:36:51.690177 | orchestrator | ok: [testbed-node-3] => { 2025-10-08 15:36:51.690188 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-08 15:36:51.690199 | orchestrator | } 2025-10-08 15:36:51.690211 | orchestrator | 2025-10-08 15:36:51.690222 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-08 15:36:51.690233 | orchestrator | Wednesday 08 October 2025 15:36:45 +0000 (0:00:00.164) 0:00:17.555 ***** 2025-10-08 15:36:51.690244 | orchestrator | ok: [testbed-node-3] => { 2025-10-08 15:36:51.690256 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-08 15:36:51.690267 | orchestrator | } 2025-10-08 15:36:51.690279 | orchestrator | 2025-10-08 15:36:51.690291 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-10-08 15:36:51.690302 | orchestrator | Wednesday 08 October 2025 15:36:46 +0000 (0:00:00.145) 0:00:17.701 ***** 2025-10-08 15:36:51.690313 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:51.690324 | orchestrator | 2025-10-08 15:36:51.690335 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-10-08 15:36:51.690346 | orchestrator | Wednesday 08 October 2025 15:36:46 +0000 (0:00:00.713) 0:00:18.414 ***** 2025-10-08 15:36:51.690357 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:51.690370 | orchestrator | 2025-10-08 15:36:51.690382 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-08 15:36:51.690394 | orchestrator | Wednesday 08 October 2025 15:36:47 +0000 (0:00:00.517) 0:00:18.932 ***** 2025-10-08 15:36:51.690406 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:51.690419 | orchestrator | 2025-10-08 15:36:51.690431 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-08 15:36:51.690443 | orchestrator | Wednesday 08 October 2025 15:36:47 +0000 (0:00:00.506) 0:00:19.439 ***** 2025-10-08 15:36:51.690455 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:51.690467 | orchestrator | 2025-10-08 15:36:51.690479 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-08 15:36:51.690491 | orchestrator | Wednesday 08 October 2025 15:36:47 +0000 (0:00:00.128) 0:00:19.568 ***** 2025-10-08 15:36:51.690503 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690516 | orchestrator | 2025-10-08 15:36:51.690528 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-08 15:36:51.690540 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.129) 0:00:19.698 ***** 2025-10-08 15:36:51.690552 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690578 | orchestrator | 2025-10-08 15:36:51.690600 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-08 15:36:51.690613 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.113) 0:00:19.811 ***** 2025-10-08 15:36:51.690650 | orchestrator | ok: 
[testbed-node-3] => { 2025-10-08 15:36:51.690663 | orchestrator |  "vgs_report": { 2025-10-08 15:36:51.690676 | orchestrator |  "vg": [] 2025-10-08 15:36:51.690687 | orchestrator |  } 2025-10-08 15:36:51.690699 | orchestrator | } 2025-10-08 15:36:51.690711 | orchestrator | 2025-10-08 15:36:51.690723 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-10-08 15:36:51.690734 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.144) 0:00:19.956 ***** 2025-10-08 15:36:51.690745 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690756 | orchestrator | 2025-10-08 15:36:51.690767 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-08 15:36:51.690778 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.144) 0:00:20.100 ***** 2025-10-08 15:36:51.690789 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690800 | orchestrator | 2025-10-08 15:36:51.690811 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-08 15:36:51.690821 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.132) 0:00:20.233 ***** 2025-10-08 15:36:51.690832 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690843 | orchestrator | 2025-10-08 15:36:51.690853 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-08 15:36:51.690864 | orchestrator | Wednesday 08 October 2025 15:36:48 +0000 (0:00:00.343) 0:00:20.576 ***** 2025-10-08 15:36:51.690875 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690886 | orchestrator | 2025-10-08 15:36:51.690896 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-08 15:36:51.690907 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.145) 0:00:20.722 ***** 2025-10-08 15:36:51.690918 | orchestrator | skipping: 
[testbed-node-3] 2025-10-08 15:36:51.690929 | orchestrator | 2025-10-08 15:36:51.690956 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-08 15:36:51.690967 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.145) 0:00:20.867 ***** 2025-10-08 15:36:51.690978 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.690990 | orchestrator | 2025-10-08 15:36:51.691000 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-08 15:36:51.691011 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.147) 0:00:21.014 ***** 2025-10-08 15:36:51.691023 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691055 | orchestrator | 2025-10-08 15:36:51.691066 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-08 15:36:51.691077 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.134) 0:00:21.149 ***** 2025-10-08 15:36:51.691088 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691099 | orchestrator | 2025-10-08 15:36:51.691111 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-08 15:36:51.691142 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.131) 0:00:21.281 ***** 2025-10-08 15:36:51.691154 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691165 | orchestrator | 2025-10-08 15:36:51.691176 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-08 15:36:51.691187 | orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.146) 0:00:21.427 ***** 2025-10-08 15:36:51.691198 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691209 | orchestrator | 2025-10-08 15:36:51.691219 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-08 15:36:51.691230 | 
orchestrator | Wednesday 08 October 2025 15:36:49 +0000 (0:00:00.147) 0:00:21.575 ***** 2025-10-08 15:36:51.691241 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691252 | orchestrator | 2025-10-08 15:36:51.691263 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-08 15:36:51.691274 | orchestrator | Wednesday 08 October 2025 15:36:50 +0000 (0:00:00.128) 0:00:21.703 ***** 2025-10-08 15:36:51.691285 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691296 | orchestrator | 2025-10-08 15:36:51.691316 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-08 15:36:51.691327 | orchestrator | Wednesday 08 October 2025 15:36:50 +0000 (0:00:00.128) 0:00:21.831 ***** 2025-10-08 15:36:51.691338 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691349 | orchestrator | 2025-10-08 15:36:51.691360 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-08 15:36:51.691371 | orchestrator | Wednesday 08 October 2025 15:36:50 +0000 (0:00:00.143) 0:00:21.975 ***** 2025-10-08 15:36:51.691382 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691393 | orchestrator | 2025-10-08 15:36:51.691404 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-08 15:36:51.691415 | orchestrator | Wednesday 08 October 2025 15:36:50 +0000 (0:00:00.148) 0:00:22.123 ***** 2025-10-08 15:36:51.691427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:51.691441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:51.691452 | orchestrator | skipping: [testbed-node-3] 2025-10-08 
15:36:51.691463 | orchestrator | 2025-10-08 15:36:51.691474 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-08 15:36:51.691485 | orchestrator | Wednesday 08 October 2025 15:36:50 +0000 (0:00:00.375) 0:00:22.498 ***** 2025-10-08 15:36:51.691496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:51.691507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:51.691518 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691529 | orchestrator | 2025-10-08 15:36:51.691540 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-08 15:36:51.691551 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.158) 0:00:22.657 ***** 2025-10-08 15:36:51.691567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:51.691579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:51.691590 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691601 | orchestrator | 2025-10-08 15:36:51.691611 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-08 15:36:51.691622 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.171) 0:00:22.828 ***** 2025-10-08 15:36:51.691633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 
15:36:51.691644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:51.691655 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691666 | orchestrator | 2025-10-08 15:36:51.691677 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-08 15:36:51.691688 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.161) 0:00:22.990 ***** 2025-10-08 15:36:51.691699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:51.691710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:51.691721 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:51.691738 | orchestrator | 2025-10-08 15:36:51.691749 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-10-08 15:36:51.691760 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.157) 0:00:23.148 ***** 2025-10-08 15:36:51.691771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:51.691788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136190 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136297 | orchestrator | 2025-10-08 15:36:57.136314 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-08 15:36:57.136328 | orchestrator | Wednesday 08 October 2025 
15:36:51 +0000 (0:00:00.166) 0:00:23.315 ***** 2025-10-08 15:36:57.136339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:57.136352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136364 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136375 | orchestrator | 2025-10-08 15:36:57.136386 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-08 15:36:57.136397 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.166) 0:00:23.481 ***** 2025-10-08 15:36:57.136408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:57.136419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136430 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136441 | orchestrator | 2025-10-08 15:36:57.136452 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-08 15:36:57.136463 | orchestrator | Wednesday 08 October 2025 15:36:51 +0000 (0:00:00.148) 0:00:23.630 ***** 2025-10-08 15:36:57.136474 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:57.136486 | orchestrator | 2025-10-08 15:36:57.136497 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-08 15:36:57.136507 | orchestrator | Wednesday 08 October 2025 15:36:52 +0000 (0:00:00.525) 0:00:24.156 ***** 2025-10-08 15:36:57.136518 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:57.136528 | 
orchestrator | 2025-10-08 15:36:57.136539 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-08 15:36:57.136550 | orchestrator | Wednesday 08 October 2025 15:36:53 +0000 (0:00:00.521) 0:00:24.677 ***** 2025-10-08 15:36:57.136561 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:36:57.136571 | orchestrator | 2025-10-08 15:36:57.136582 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-08 15:36:57.136593 | orchestrator | Wednesday 08 October 2025 15:36:53 +0000 (0:00:00.150) 0:00:24.828 ***** 2025-10-08 15:36:57.136604 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'vg_name': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'}) 2025-10-08 15:36:57.136616 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'vg_name': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'}) 2025-10-08 15:36:57.136626 | orchestrator | 2025-10-08 15:36:57.136638 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-08 15:36:57.136648 | orchestrator | Wednesday 08 October 2025 15:36:53 +0000 (0:00:00.179) 0:00:25.008 ***** 2025-10-08 15:36:57.136659 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:57.136695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136707 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136718 | orchestrator | 2025-10-08 15:36:57.136731 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-08 15:36:57.136743 | orchestrator | Wednesday 08 October 2025 15:36:53 +0000 
(0:00:00.390) 0:00:25.398 ***** 2025-10-08 15:36:57.136755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:57.136768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136780 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136793 | orchestrator | 2025-10-08 15:36:57.136805 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-08 15:36:57.136817 | orchestrator | Wednesday 08 October 2025 15:36:53 +0000 (0:00:00.162) 0:00:25.560 ***** 2025-10-08 15:36:57.136829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})  2025-10-08 15:36:57.136842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})  2025-10-08 15:36:57.136854 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:36:57.136866 | orchestrator | 2025-10-08 15:36:57.136878 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-08 15:36:57.136890 | orchestrator | Wednesday 08 October 2025 15:36:54 +0000 (0:00:00.157) 0:00:25.717 ***** 2025-10-08 15:36:57.136901 | orchestrator | ok: [testbed-node-3] => { 2025-10-08 15:36:57.136914 | orchestrator |  "lvm_report": { 2025-10-08 15:36:57.136926 | orchestrator |  "lv": [ 2025-10-08 15:36:57.136937 | orchestrator |  { 2025-10-08 15:36:57.136968 | orchestrator |  "lv_name": "osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626", 2025-10-08 15:36:57.136981 | orchestrator |  "vg_name": "ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626" 2025-10-08 15:36:57.136993 | 
        },
        {
            "lv_name": "osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485",
            "vg_name": "ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485"
        }
    ],
    "pv": [
        {
            "pv_name": "/dev/sdb",
            "vg_name": "ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626"
        },
        {
            "pv_name": "/dev/sdc",
            "vg_name": "ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485"
        }
    ]
  }
}

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Wednesday 08 October 2025 15:36:54 +0000 (0:00:00.294) 0:00:26.012 *****
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Wednesday 08 October 2025 15:36:54 +0000 (0:00:00.295) 0:00:26.308 *****
ok: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:54 +0000 (0:00:00.221) 0:00:26.529 *****
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:55 +0000 (0:00:00.407) 0:00:26.937 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:55 +0000 (0:00:00.188) 0:00:27.126 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:55 +0000 (0:00:00.179) 0:00:27.305 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:56 +0000 (0:00:00.605) 0:00:27.911 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:56 +0000 (0:00:00.218) 0:00:28.130 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:56 +0000 (0:00:00.211) 0:00:28.342 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:56 +0000 (0:00:00.210) 0:00:28.552 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:57 +0000 (0:00:00.208) 0:00:28.760 *****
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:57 +0000 (0:00:00.233) 0:00:28.995 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3)

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:57 +0000 (0:00:00.417) 0:00:29.412 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade)

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:58 +0000 (0:00:00.399) 0:00:29.811 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956)

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:58 +0000 (0:00:00.343) 0:00:30.155 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021)

TASK [Add known links to the list of available block devices] ******************
Wednesday 08 October 2025 15:36:59 +0000 (0:00:00.538) 0:00:30.693 *****
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:36:59 +0000 (0:00:00.473) 0:00:31.167 *****
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.506) 0:00:31.673 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.179) 0:00:31.852 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.176) 0:00:32.029 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.205) 0:00:32.235 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.208) 0:00:32.443 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:00 +0000 (0:00:00.188) 0:00:32.631 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:01 +0000 (0:00:00.189) 0:00:32.821 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:01 +0000 (0:00:00.202) 0:00:33.023 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:01 +0000 (0:00:00.259) 0:00:33.283 *****
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:02 +0000 (0:00:00.912) 0:00:34.196 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:02 +0000 (0:00:00.194) 0:00:34.391 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:03 +0000 (0:00:00.721) 0:00:35.112 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 08 October 2025 15:37:03 +0000 (0:00:00.206) 0:00:35.319 *****
skipping: [testbed-node-4]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Wednesday 08 October 2025 15:37:03 +0000 (0:00:00.220) 0:00:35.539 *****
skipping: [testbed-node-4]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Wednesday 08 October 2025 15:37:04 +0000 (0:00:00.152) 0:00:35.692 *****
ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ac75f6e-526f-52f0-b624-7532d6099aef'}})
ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafbc9f1-844e-58d3-a294-acb7fdea1516'}})

TASK [Create block VGs] ********************************************************
Wednesday 08 October 2025 15:37:04 +0000 (0:00:00.190) 0:00:35.883 *****
changed: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
changed: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})

TASK [Print 'Create block VGs'] ************************************************
Wednesday 08 October 2025 15:37:06 +0000 (0:00:01.847) 0:00:37.731 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create block LVs] ********************************************************
Wednesday 08 October 2025 15:37:06 +0000 (0:00:00.160) 0:00:37.892 *****
changed: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
changed: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})

TASK [Print 'Create block LVs'] ************************************************
Wednesday 08 October 2025 15:37:07 +0000 (0:00:01.393) 0:00:39.285 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create DB VGs] ***********************************************************
Wednesday 08 October 2025 15:37:07 +0000 (0:00:00.171) 0:00:39.457 *****
skipping: [testbed-node-4]

TASK [Print 'Create DB VGs'] ***************************************************
Wednesday 08 October 2025 15:37:07 +0000 (0:00:00.145) 0:00:39.603 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create WAL VGs] **********************************************************
Wednesday 08 October 2025 15:37:08 +0000 (0:00:00.156) 0:00:39.760 *****
skipping: [testbed-node-4]

TASK [Print 'Create WAL VGs'] **************************************************
Wednesday 08 October 2025 15:37:08 +0000 (0:00:00.145) 0:00:39.905 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create DB+WAL VGs] *******************************************************
Wednesday 08 October 2025 15:37:08 +0000 (0:00:00.372) 0:00:40.278 *****
skipping: [testbed-node-4]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Wednesday 08 October 2025 15:37:08 +0000 (0:00:00.140) 0:00:40.418 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Prepare variables for OSD count check] ***********************************
Wednesday 08 October 2025 15:37:08 +0000 (0:00:00.153) 0:00:40.572 *****
ok: [testbed-node-4]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.131) 0:00:40.704 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.170) 0:00:40.874 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.167) 0:00:41.041 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.155) 0:00:41.197 *****
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.127) 0:00:41.324 *****
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.140) 0:00:41.465 *****
skipping: [testbed-node-4]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Wednesday 08 October 2025 15:37:09 +0000 (0:00:00.143) 0:00:41.608 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Wednesday 08 October 2025 15:37:10 +0000 (0:00:00.156) 0:00:41.764 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Wednesday 08 October 2025 15:37:10 +0000 (0:00:00.143) 0:00:41.908 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Wednesday 08 October 2025 15:37:10 +0000 (0:00:00.344) 0:00:42.253 *****
ok: [testbed-node-4]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Wednesday 08 October 2025 15:37:11 +0000 (0:00:00.515) 0:00:42.768 *****
ok: [testbed-node-4]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Wednesday 08 October 2025 15:37:11 +0000 (0:00:00.508) 0:00:43.277 *****
ok: [testbed-node-4]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.512) 0:00:43.790 *****
ok: [testbed-node-4]

TASK [Calculate VG sizes (without buffer)] *************************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.148) 0:00:43.938 *****
skipping: [testbed-node-4]

TASK [Calculate VG sizes (with buffer)] ****************************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.115) 0:00:44.054 *****
skipping: [testbed-node-4]

TASK [Print LVM VGs report data] ***********************************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.126) 0:00:44.181 *****
ok: [testbed-node-4] => {
    "vgs_report": {
        "vg": []
    }
}

TASK [Print LVM VG sizes] ******************************************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.152) 0:00:44.334 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_db_devices] ************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.140) 0:00:44.474 *****
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_db_devices] ****************************
Wednesday 08 October 2025 15:37:12 +0000 (0:00:00.140) 0:00:44.615 *****
skipping: [testbed-node-4]

TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
Wednesday 08 October 2025 15:37:13 +0000 (0:00:00.145) 0:00:44.760 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
Wednesday 08 October 2025 15:37:13 +0000 (0:00:00.143) 0:00:44.904 *****
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Wednesday 08 October 2025 15:37:13 +0000 (0:00:00.342) 0:00:45.246 *****
skipping: [testbed-node-4]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Wednesday 08 October 2025 15:37:13 +0000 (0:00:00.137) 0:00:45.384 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Wednesday 08 October 2025 15:37:13 +0000 (0:00:00.145) 0:00:45.530 *****
skipping: [testbed-node-4]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.146) 0:00:45.676 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.144) 0:00:45.821 *****
skipping: [testbed-node-4]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.142) 0:00:45.963 *****
skipping: [testbed-node-4]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.128) 0:00:46.093 *****
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.133) 0:00:46.226 *****
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.136) 0:00:46.362 *****
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Wednesday 08 October 2025 15:37:14 +0000 (0:00:00.138) 0:00:46.500 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
Wednesday 08 October 2025 15:37:15 +0000 (0:00:00.172) 0:00:46.672 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_wal_devices] *************************************
Wednesday 08 October 2025 15:37:15 +0000 (0:00:00.164) 0:00:46.837 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
Wednesday 08 October 2025 15:37:15 +0000 (0:00:00.201) 0:00:47.038 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
Wednesday 08 October 2025 15:37:15 +0000 (0:00:00.371) 0:00:47.410 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
Wednesday 08 October 2025 15:37:15 +0000 (0:00:00.161) 0:00:47.571 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
Wednesday 08 October 2025 15:37:16 +0000 (0:00:00.155) 0:00:47.727 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
Wednesday 08 October 2025 15:37:16 +0000 (0:00:00.158) 0:00:47.885 *****
skipping: [testbed-node-4] =>
(item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})  2025-10-08 15:37:18.136133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})  2025-10-08 15:37:18.136144 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:37:18.136155 | orchestrator | 2025-10-08 15:37:18.136166 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-08 15:37:18.136213 | orchestrator | Wednesday 08 October 2025 15:37:16 +0000 (0:00:00.166) 0:00:48.052 ***** 2025-10-08 15:37:18.136226 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:37:18.136237 | orchestrator | 2025-10-08 15:37:18.136248 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-08 15:37:18.136259 | orchestrator | Wednesday 08 October 2025 15:37:16 +0000 (0:00:00.514) 0:00:48.567 ***** 2025-10-08 15:37:18.136270 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:37:18.136280 | orchestrator | 2025-10-08 15:37:18.136291 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-08 15:37:18.136302 | orchestrator | Wednesday 08 October 2025 15:37:17 +0000 (0:00:00.514) 0:00:49.081 ***** 2025-10-08 15:37:18.136313 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:37:18.136324 | orchestrator | 2025-10-08 15:37:18.136334 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-08 15:37:18.136345 | orchestrator | Wednesday 08 October 2025 15:37:17 +0000 (0:00:00.168) 0:00:49.250 ***** 2025-10-08 15:37:18.136356 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'vg_name': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'}) 2025-10-08 15:37:18.136368 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'vg_name': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'}) 2025-10-08 15:37:18.136379 | orchestrator | 2025-10-08 15:37:18.136390 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-08 15:37:18.136401 | orchestrator | Wednesday 08 October 2025 15:37:17 +0000 (0:00:00.187) 0:00:49.437 ***** 2025-10-08 15:37:18.136412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})  2025-10-08 15:37:18.136423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})  2025-10-08 15:37:18.136434 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:37:18.136445 | orchestrator | 2025-10-08 15:37:18.136456 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-08 15:37:18.136467 | orchestrator | Wednesday 08 October 2025 15:37:17 +0000 (0:00:00.155) 0:00:49.593 ***** 2025-10-08 15:37:18.136478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})  2025-10-08 15:37:18.136489 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})  2025-10-08 15:37:18.136508 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:37:24.317608 | orchestrator | 2025-10-08 15:37:24.317702 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-08 15:37:24.317716 | orchestrator | Wednesday 08 October 2025 15:37:18 +0000 (0:00:00.160) 0:00:49.753 ***** 2025-10-08 15:37:24.317728 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})  2025-10-08 15:37:24.317740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})  2025-10-08 15:37:24.317750 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:37:24.317761 | orchestrator | 2025-10-08 15:37:24.317771 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-08 15:37:24.317781 | orchestrator | Wednesday 08 October 2025 15:37:18 +0000 (0:00:00.165) 0:00:49.918 ***** 2025-10-08 15:37:24.317812 | orchestrator | ok: [testbed-node-4] => { 2025-10-08 15:37:24.317823 | orchestrator |  "lvm_report": { 2025-10-08 15:37:24.317834 | orchestrator |  "lv": [ 2025-10-08 15:37:24.317843 | orchestrator |  { 2025-10-08 15:37:24.317853 | orchestrator |  "lv_name": "osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef", 2025-10-08 15:37:24.317864 | orchestrator |  "vg_name": "ceph-7ac75f6e-526f-52f0-b624-7532d6099aef" 2025-10-08 15:37:24.317873 | orchestrator |  }, 2025-10-08 15:37:24.317883 | orchestrator |  { 2025-10-08 15:37:24.317892 | orchestrator |  "lv_name": "osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516", 2025-10-08 15:37:24.317902 | orchestrator |  "vg_name": "ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516" 2025-10-08 15:37:24.317911 | orchestrator |  } 2025-10-08 15:37:24.317921 | orchestrator |  ], 2025-10-08 15:37:24.317931 | orchestrator |  "pv": [ 2025-10-08 15:37:24.317940 | orchestrator |  { 2025-10-08 15:37:24.317949 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-08 15:37:24.317959 | orchestrator |  "vg_name": "ceph-7ac75f6e-526f-52f0-b624-7532d6099aef" 2025-10-08 15:37:24.317968 | orchestrator |  }, 2025-10-08 15:37:24.317978 | orchestrator |  { 2025-10-08 15:37:24.317988 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-08 15:37:24.317997 | orchestrator |  "vg_name": 
"ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516" 2025-10-08 15:37:24.318007 | orchestrator |  } 2025-10-08 15:37:24.318103 | orchestrator |  ] 2025-10-08 15:37:24.318116 | orchestrator |  } 2025-10-08 15:37:24.318126 | orchestrator | } 2025-10-08 15:37:24.318136 | orchestrator | 2025-10-08 15:37:24.318146 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-08 15:37:24.318156 | orchestrator | 2025-10-08 15:37:24.318168 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-08 15:37:24.318178 | orchestrator | Wednesday 08 October 2025 15:37:18 +0000 (0:00:00.489) 0:00:50.408 ***** 2025-10-08 15:37:24.318189 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-08 15:37:24.318200 | orchestrator | 2025-10-08 15:37:24.318223 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-08 15:37:24.318233 | orchestrator | Wednesday 08 October 2025 15:37:19 +0000 (0:00:00.261) 0:00:50.670 ***** 2025-10-08 15:37:24.318244 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:37:24.318255 | orchestrator | 2025-10-08 15:37:24.318266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318276 | orchestrator | Wednesday 08 October 2025 15:37:19 +0000 (0:00:00.234) 0:00:50.904 ***** 2025-10-08 15:37:24.318287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-10-08 15:37:24.318298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-10-08 15:37:24.318308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-10-08 15:37:24.318319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-10-08 15:37:24.318329 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-10-08 15:37:24.318340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-10-08 15:37:24.318350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-10-08 15:37:24.318361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-10-08 15:37:24.318371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-10-08 15:37:24.318382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-10-08 15:37:24.318392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-10-08 15:37:24.318413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-10-08 15:37:24.318423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-10-08 15:37:24.318434 | orchestrator | 2025-10-08 15:37:24.318444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318455 | orchestrator | Wednesday 08 October 2025 15:37:19 +0000 (0:00:00.426) 0:00:51.331 ***** 2025-10-08 15:37:24.318465 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318481 | orchestrator | 2025-10-08 15:37:24.318492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318502 | orchestrator | Wednesday 08 October 2025 15:37:19 +0000 (0:00:00.221) 0:00:51.552 ***** 2025-10-08 15:37:24.318513 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318524 | orchestrator | 2025-10-08 15:37:24.318533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318560 | orchestrator | 
Wednesday 08 October 2025 15:37:20 +0000 (0:00:00.216) 0:00:51.769 ***** 2025-10-08 15:37:24.318571 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318580 | orchestrator | 2025-10-08 15:37:24.318590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318600 | orchestrator | Wednesday 08 October 2025 15:37:20 +0000 (0:00:00.195) 0:00:51.965 ***** 2025-10-08 15:37:24.318609 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318619 | orchestrator | 2025-10-08 15:37:24.318628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318637 | orchestrator | Wednesday 08 October 2025 15:37:20 +0000 (0:00:00.196) 0:00:52.162 ***** 2025-10-08 15:37:24.318647 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318657 | orchestrator | 2025-10-08 15:37:24.318666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318676 | orchestrator | Wednesday 08 October 2025 15:37:20 +0000 (0:00:00.206) 0:00:52.368 ***** 2025-10-08 15:37:24.318685 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318695 | orchestrator | 2025-10-08 15:37:24.318704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318714 | orchestrator | Wednesday 08 October 2025 15:37:21 +0000 (0:00:00.659) 0:00:53.027 ***** 2025-10-08 15:37:24.318723 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318732 | orchestrator | 2025-10-08 15:37:24.318742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318751 | orchestrator | Wednesday 08 October 2025 15:37:21 +0000 (0:00:00.201) 0:00:53.229 ***** 2025-10-08 15:37:24.318761 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:24.318770 | orchestrator | 2025-10-08 15:37:24.318779 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318789 | orchestrator | Wednesday 08 October 2025 15:37:21 +0000 (0:00:00.213) 0:00:53.443 ***** 2025-10-08 15:37:24.318798 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52) 2025-10-08 15:37:24.318809 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52) 2025-10-08 15:37:24.318819 | orchestrator | 2025-10-08 15:37:24.318828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318838 | orchestrator | Wednesday 08 October 2025 15:37:22 +0000 (0:00:00.444) 0:00:53.887 ***** 2025-10-08 15:37:24.318847 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd) 2025-10-08 15:37:24.318857 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd) 2025-10-08 15:37:24.318866 | orchestrator | 2025-10-08 15:37:24.318876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318885 | orchestrator | Wednesday 08 October 2025 15:37:22 +0000 (0:00:00.410) 0:00:54.298 ***** 2025-10-08 15:37:24.318906 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1) 2025-10-08 15:37:24.318915 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1) 2025-10-08 15:37:24.318925 | orchestrator | 2025-10-08 15:37:24.318934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318944 | orchestrator | Wednesday 08 October 2025 15:37:23 +0000 (0:00:00.417) 0:00:54.715 ***** 2025-10-08 15:37:24.318953 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f) 2025-10-08 15:37:24.318963 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f) 2025-10-08 15:37:24.318973 | orchestrator | 2025-10-08 15:37:24.318982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-08 15:37:24.318992 | orchestrator | Wednesday 08 October 2025 15:37:23 +0000 (0:00:00.458) 0:00:55.174 ***** 2025-10-08 15:37:24.319001 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-08 15:37:24.319010 | orchestrator | 2025-10-08 15:37:24.319045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:24.319056 | orchestrator | Wednesday 08 October 2025 15:37:23 +0000 (0:00:00.347) 0:00:55.522 ***** 2025-10-08 15:37:24.319065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-10-08 15:37:24.319075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-10-08 15:37:24.319085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-10-08 15:37:24.319094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-10-08 15:37:24.319104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-10-08 15:37:24.319113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-10-08 15:37:24.319123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-10-08 15:37:24.319132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-10-08 15:37:24.319142 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-10-08 15:37:24.319151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-10-08 15:37:24.319161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-10-08 15:37:24.319176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-10-08 15:37:33.656512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-10-08 15:37:33.656646 | orchestrator | 2025-10-08 15:37:33.656664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656676 | orchestrator | Wednesday 08 October 2025 15:37:24 +0000 (0:00:00.412) 0:00:55.935 ***** 2025-10-08 15:37:33.656687 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656702 | orchestrator | 2025-10-08 15:37:33.656723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656735 | orchestrator | Wednesday 08 October 2025 15:37:24 +0000 (0:00:00.197) 0:00:56.132 ***** 2025-10-08 15:37:33.656747 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656758 | orchestrator | 2025-10-08 15:37:33.656769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656780 | orchestrator | Wednesday 08 October 2025 15:37:25 +0000 (0:00:00.661) 0:00:56.793 ***** 2025-10-08 15:37:33.656791 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656803 | orchestrator | 2025-10-08 15:37:33.656814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656846 | orchestrator | Wednesday 08 October 2025 15:37:25 +0000 (0:00:00.226) 0:00:57.020 ***** 2025-10-08 15:37:33.656858 | orchestrator | 
skipping: [testbed-node-5] 2025-10-08 15:37:33.656869 | orchestrator | 2025-10-08 15:37:33.656880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656891 | orchestrator | Wednesday 08 October 2025 15:37:25 +0000 (0:00:00.207) 0:00:57.227 ***** 2025-10-08 15:37:33.656902 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656913 | orchestrator | 2025-10-08 15:37:33.656924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656935 | orchestrator | Wednesday 08 October 2025 15:37:25 +0000 (0:00:00.222) 0:00:57.450 ***** 2025-10-08 15:37:33.656945 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656956 | orchestrator | 2025-10-08 15:37:33.656967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.656978 | orchestrator | Wednesday 08 October 2025 15:37:26 +0000 (0:00:00.223) 0:00:57.673 ***** 2025-10-08 15:37:33.656989 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.656999 | orchestrator | 2025-10-08 15:37:33.657010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657060 | orchestrator | Wednesday 08 October 2025 15:37:26 +0000 (0:00:00.253) 0:00:57.927 ***** 2025-10-08 15:37:33.657075 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657087 | orchestrator | 2025-10-08 15:37:33.657100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657112 | orchestrator | Wednesday 08 October 2025 15:37:26 +0000 (0:00:00.233) 0:00:58.160 ***** 2025-10-08 15:37:33.657124 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-10-08 15:37:33.657137 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-10-08 15:37:33.657149 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-10-08 
15:37:33.657162 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-10-08 15:37:33.657174 | orchestrator | 2025-10-08 15:37:33.657186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657198 | orchestrator | Wednesday 08 October 2025 15:37:27 +0000 (0:00:00.665) 0:00:58.826 ***** 2025-10-08 15:37:33.657209 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657221 | orchestrator | 2025-10-08 15:37:33.657233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657245 | orchestrator | Wednesday 08 October 2025 15:37:27 +0000 (0:00:00.200) 0:00:59.026 ***** 2025-10-08 15:37:33.657257 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657269 | orchestrator | 2025-10-08 15:37:33.657281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657293 | orchestrator | Wednesday 08 October 2025 15:37:27 +0000 (0:00:00.205) 0:00:59.232 ***** 2025-10-08 15:37:33.657305 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657318 | orchestrator | 2025-10-08 15:37:33.657330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-08 15:37:33.657342 | orchestrator | Wednesday 08 October 2025 15:37:27 +0000 (0:00:00.221) 0:00:59.453 ***** 2025-10-08 15:37:33.657354 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657367 | orchestrator | 2025-10-08 15:37:33.657379 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-10-08 15:37:33.657391 | orchestrator | Wednesday 08 October 2025 15:37:28 +0000 (0:00:00.253) 0:00:59.707 ***** 2025-10-08 15:37:33.657403 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657414 | orchestrator | 2025-10-08 15:37:33.657425 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-10-08 15:37:33.657436 | orchestrator | Wednesday 08 October 2025 15:37:28 +0000 (0:00:00.343) 0:01:00.050 ***** 2025-10-08 15:37:33.657447 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93919d76-3b82-5996-a675-e75a55626347'}}) 2025-10-08 15:37:33.657458 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cead9db5-2c40-515a-bcee-782342d5bd60'}}) 2025-10-08 15:37:33.657477 | orchestrator | 2025-10-08 15:37:33.657488 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-10-08 15:37:33.657499 | orchestrator | Wednesday 08 October 2025 15:37:28 +0000 (0:00:00.217) 0:01:00.268 ***** 2025-10-08 15:37:33.657511 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'}) 2025-10-08 15:37:33.657523 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'}) 2025-10-08 15:37:33.657534 | orchestrator | 2025-10-08 15:37:33.657545 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-10-08 15:37:33.657575 | orchestrator | Wednesday 08 October 2025 15:37:30 +0000 (0:00:01.922) 0:01:02.191 ***** 2025-10-08 15:37:33.657587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})  2025-10-08 15:37:33.657600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})  2025-10-08 15:37:33.657611 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657622 | orchestrator | 2025-10-08 15:37:33.657633 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-10-08 15:37:33.657643 | orchestrator | Wednesday 08 October 2025 15:37:30 +0000 (0:00:00.171) 0:01:02.362 ***** 2025-10-08 15:37:33.657654 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'}) 2025-10-08 15:37:33.657682 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'}) 2025-10-08 15:37:33.657694 | orchestrator | 2025-10-08 15:37:33.657706 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-08 15:37:33.657716 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:01.323) 0:01:03.686 ***** 2025-10-08 15:37:33.657727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})  2025-10-08 15:37:33.657738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})  2025-10-08 15:37:33.657749 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657760 | orchestrator | 2025-10-08 15:37:33.657771 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-08 15:37:33.657781 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.160) 0:01:03.846 ***** 2025-10-08 15:37:33.657792 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657803 | orchestrator | 2025-10-08 15:37:33.657813 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-08 15:37:33.657824 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.143) 0:01:03.990 ***** 2025-10-08 15:37:33.657835 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})  2025-10-08 15:37:33.657851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})  2025-10-08 15:37:33.657862 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657873 | orchestrator | 2025-10-08 15:37:33.657884 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-08 15:37:33.657895 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.146) 0:01:04.137 ***** 2025-10-08 15:37:33.657906 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657923 | orchestrator | 2025-10-08 15:37:33.657934 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-08 15:37:33.657945 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.138) 0:01:04.275 ***** 2025-10-08 15:37:33.657955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})  2025-10-08 15:37:33.657966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})  2025-10-08 15:37:33.657977 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.657988 | orchestrator | 2025-10-08 15:37:33.657999 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-08 15:37:33.658010 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.171) 0:01:04.447 ***** 2025-10-08 15:37:33.658106 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:37:33.658119 | orchestrator | 2025-10-08 15:37:33.658131 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] ***********************************************
2025-10-08 15:37:33.658142 | orchestrator | Wednesday 08 October 2025 15:37:32 +0000 (0:00:00.145) 0:01:04.593 *****
2025-10-08 15:37:33.658152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:33.658163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:33.658174 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:33.658185 | orchestrator |
2025-10-08 15:37:33.658196 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-10-08 15:37:33.658207 | orchestrator | Wednesday 08 October 2025 15:37:33 +0000 (0:00:00.372) 0:01:04.759 *****
2025-10-08 15:37:33.658218 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:33.658228 | orchestrator |
2025-10-08 15:37:33.658239 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-10-08 15:37:33.658250 | orchestrator | Wednesday 08 October 2025 15:37:33 +0000 (0:00:00.372) 0:01:05.131 *****
2025-10-08 15:37:33.658270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:39.802537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:39.802640 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802655 | orchestrator |
2025-10-08 15:37:39.802666 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-10-08 15:37:39.802678 | orchestrator | Wednesday 08 October 2025 15:37:33 +0000 (0:00:00.151) 0:01:05.283 *****
2025-10-08 15:37:39.802688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:39.802699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:39.802709 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802719 | orchestrator |
2025-10-08 15:37:39.802729 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-10-08 15:37:39.802739 | orchestrator | Wednesday 08 October 2025 15:37:33 +0000 (0:00:00.152) 0:01:05.435 *****
2025-10-08 15:37:39.802749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:39.802759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:39.802768 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802803 | orchestrator |
2025-10-08 15:37:39.802813 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-10-08 15:37:39.802823 | orchestrator | Wednesday 08 October 2025 15:37:33 +0000 (0:00:00.155) 0:01:05.590 *****
2025-10-08 15:37:39.802832 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802842 | orchestrator |
2025-10-08 15:37:39.802851 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-10-08 15:37:39.802861 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.138) 0:01:05.729 *****
2025-10-08 15:37:39.802871 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802880 | orchestrator |
2025-10-08 15:37:39.802890 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-10-08 15:37:39.802899 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.134) 0:01:05.863 *****
2025-10-08 15:37:39.802909 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.802918 | orchestrator |
2025-10-08 15:37:39.802928 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-10-08 15:37:39.802951 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.135) 0:01:05.999 *****
2025-10-08 15:37:39.802962 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:37:39.802972 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-10-08 15:37:39.802982 | orchestrator | }
2025-10-08 15:37:39.802992 | orchestrator |
2025-10-08 15:37:39.803001 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-10-08 15:37:39.803011 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.147) 0:01:06.146 *****
2025-10-08 15:37:39.803054 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:37:39.803065 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-10-08 15:37:39.803076 | orchestrator | }
2025-10-08 15:37:39.803086 | orchestrator |
2025-10-08 15:37:39.803097 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-10-08 15:37:39.803107 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.172) 0:01:06.319 *****
2025-10-08 15:37:39.803118 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:37:39.803129 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-10-08 15:37:39.803140 | orchestrator | }
2025-10-08 15:37:39.803151 | orchestrator |
2025-10-08 15:37:39.803162 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-10-08 15:37:39.803172 | orchestrator | Wednesday 08 October 2025 15:37:34 +0000 (0:00:00.157) 0:01:06.476 *****
2025-10-08 15:37:39.803184 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:39.803195 | orchestrator |
2025-10-08 15:37:39.803205 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-10-08 15:37:39.803215 | orchestrator | Wednesday 08 October 2025 15:37:35 +0000 (0:00:00.498) 0:01:06.975 *****
2025-10-08 15:37:39.803226 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:39.803236 | orchestrator |
2025-10-08 15:37:39.803247 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-10-08 15:37:39.803257 | orchestrator | Wednesday 08 October 2025 15:37:35 +0000 (0:00:00.515) 0:01:07.491 *****
2025-10-08 15:37:39.803267 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:39.803278 | orchestrator |
2025-10-08 15:37:39.803288 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-10-08 15:37:39.803299 | orchestrator | Wednesday 08 October 2025 15:37:36 +0000 (0:00:00.743) 0:01:08.235 *****
2025-10-08 15:37:39.803309 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:39.803320 | orchestrator |
2025-10-08 15:37:39.803330 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-10-08 15:37:39.803341 | orchestrator | Wednesday 08 October 2025 15:37:36 +0000 (0:00:00.160) 0:01:08.395 *****
2025-10-08 15:37:39.803351 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803362 | orchestrator |
2025-10-08 15:37:39.803372 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-10-08 15:37:39.803382 | orchestrator | Wednesday 08 October 2025 15:37:36 +0000 (0:00:00.121) 0:01:08.517 *****
2025-10-08 15:37:39.803400 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803411 | orchestrator |
2025-10-08 15:37:39.803422 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-10-08 15:37:39.803432 | orchestrator | Wednesday 08 October 2025 15:37:36 +0000 (0:00:00.100) 0:01:08.617 *****
2025-10-08 15:37:39.803442 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:37:39.803451 | orchestrator |     "vgs_report": {
2025-10-08 15:37:39.803461 | orchestrator |         "vg": []
2025-10-08 15:37:39.803488 | orchestrator |     }
2025-10-08 15:37:39.803498 | orchestrator | }
2025-10-08 15:37:39.803508 | orchestrator |
2025-10-08 15:37:39.803518 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-10-08 15:37:39.803527 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.132) 0:01:08.749 *****
2025-10-08 15:37:39.803537 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803547 | orchestrator |
2025-10-08 15:37:39.803556 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-10-08 15:37:39.803566 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.146) 0:01:08.896 *****
2025-10-08 15:37:39.803575 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803585 | orchestrator |
2025-10-08 15:37:39.803594 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-10-08 15:37:39.803604 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.147) 0:01:09.043 *****
2025-10-08 15:37:39.803613 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803623 | orchestrator |
2025-10-08 15:37:39.803632 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-10-08 15:37:39.803642 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.143) 0:01:09.187 *****
2025-10-08 15:37:39.803652 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803661 | orchestrator |
2025-10-08 15:37:39.803671 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-10-08 15:37:39.803680 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.155) 0:01:09.342 *****
2025-10-08 15:37:39.803690 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803699 | orchestrator |
2025-10-08 15:37:39.803709 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-10-08 15:37:39.803718 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.150) 0:01:09.493 *****
2025-10-08 15:37:39.803728 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803737 | orchestrator |
2025-10-08 15:37:39.803747 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-10-08 15:37:39.803757 | orchestrator | Wednesday 08 October 2025 15:37:37 +0000 (0:00:00.137) 0:01:09.631 *****
2025-10-08 15:37:39.803766 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803776 | orchestrator |
2025-10-08 15:37:39.803785 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-10-08 15:37:39.803795 | orchestrator | Wednesday 08 October 2025 15:37:38 +0000 (0:00:00.128) 0:01:09.759 *****
2025-10-08 15:37:39.803804 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803814 | orchestrator |
2025-10-08 15:37:39.803823 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-10-08 15:37:39.803833 | orchestrator | Wednesday 08 October 2025 15:37:38 +0000 (0:00:00.362) 0:01:10.122 *****
2025-10-08 15:37:39.803842 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803852 | orchestrator |
2025-10-08 15:37:39.803861 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-10-08 15:37:39.803876 | orchestrator | Wednesday 08 October 2025 15:37:38 +0000 (0:00:00.144) 0:01:10.266 *****
2025-10-08 15:37:39.803886 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803896 | orchestrator |
2025-10-08 15:37:39.803905 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-10-08 15:37:39.803915 | orchestrator | Wednesday 08 October 2025 15:37:38 +0000 (0:00:00.148) 0:01:10.414 *****
2025-10-08 15:37:39.803925 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803940 | orchestrator |
2025-10-08 15:37:39.803950 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-10-08 15:37:39.803960 | orchestrator | Wednesday 08 October 2025 15:37:38 +0000 (0:00:00.142) 0:01:10.557 *****
2025-10-08 15:37:39.803969 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.803979 | orchestrator |
2025-10-08 15:37:39.803989 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-10-08 15:37:39.803998 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.137) 0:01:10.694 *****
2025-10-08 15:37:39.804008 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.804045 | orchestrator |
2025-10-08 15:37:39.804057 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-10-08 15:37:39.804066 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.132) 0:01:10.827 *****
2025-10-08 15:37:39.804076 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.804086 | orchestrator |
2025-10-08 15:37:39.804095 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-10-08 15:37:39.804105 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.146) 0:01:10.973 *****
2025-10-08 15:37:39.804115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:39.804125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:39.804135 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.804144 | orchestrator |
2025-10-08 15:37:39.804154 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-10-08 15:37:39.804164 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.147) 0:01:11.121 *****
2025-10-08 15:37:39.804174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:39.804183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:39.804193 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:39.804203 | orchestrator |
2025-10-08 15:37:39.804213 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-10-08 15:37:39.804222 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.152) 0:01:11.274 *****
2025-10-08 15:37:39.804239 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822506 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822523 | orchestrator |
2025-10-08 15:37:42.822536 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-10-08 15:37:42.822549 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.154) 0:01:11.428 *****
2025-10-08 15:37:42.822560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822583 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822594 | orchestrator |
2025-10-08 15:37:42.822605 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-10-08 15:37:42.822616 | orchestrator | Wednesday 08 October 2025 15:37:39 +0000 (0:00:00.153) 0:01:11.581 *****
2025-10-08 15:37:42.822627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822676 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822687 | orchestrator |
2025-10-08 15:37:42.822698 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-10-08 15:37:42.822709 | orchestrator | Wednesday 08 October 2025 15:37:40 +0000 (0:00:00.155) 0:01:11.737 *****
2025-10-08 15:37:42.822720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822742 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822753 | orchestrator |
2025-10-08 15:37:42.822764 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-10-08 15:37:42.822775 | orchestrator | Wednesday 08 October 2025 15:37:40 +0000 (0:00:00.363) 0:01:12.100 *****
2025-10-08 15:37:42.822786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822797 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822808 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822819 | orchestrator |
2025-10-08 15:37:42.822830 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-10-08 15:37:42.822842 | orchestrator | Wednesday 08 October 2025 15:37:40 +0000 (0:00:00.157) 0:01:12.258 *****
2025-10-08 15:37:42.822853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.822864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.822875 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.822886 | orchestrator |
2025-10-08 15:37:42.822897 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-10-08 15:37:42.822908 | orchestrator | Wednesday 08 October 2025 15:37:40 +0000 (0:00:00.163) 0:01:12.421 *****
2025-10-08 15:37:42.822919 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:42.822931 | orchestrator |
2025-10-08 15:37:42.822944 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-10-08 15:37:42.822956 | orchestrator | Wednesday 08 October 2025 15:37:41 +0000 (0:00:00.523) 0:01:12.944 *****
2025-10-08 15:37:42.822968 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:42.822980 | orchestrator |
2025-10-08 15:37:42.822992 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-10-08 15:37:42.823004 | orchestrator | Wednesday 08 October 2025 15:37:41 +0000 (0:00:00.497) 0:01:13.442 *****
2025-10-08 15:37:42.823015 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:37:42.823056 | orchestrator |
2025-10-08 15:37:42.823068 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-10-08 15:37:42.823080 | orchestrator | Wednesday 08 October 2025 15:37:41 +0000 (0:00:00.143) 0:01:13.586 *****
2025-10-08 15:37:42.823093 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'vg_name': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.823106 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'vg_name': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.823118 | orchestrator |
2025-10-08 15:37:42.823131 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-10-08 15:37:42.823150 | orchestrator | Wednesday 08 October 2025 15:37:42 +0000 (0:00:00.177) 0:01:13.764 *****
2025-10-08 15:37:42.823179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.823192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.823205 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.823217 | orchestrator |
2025-10-08 15:37:42.823229 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-10-08 15:37:42.823240 | orchestrator | Wednesday 08 October 2025 15:37:42 +0000 (0:00:00.165) 0:01:13.929 *****
2025-10-08 15:37:42.823252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.823265 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.823278 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.823290 | orchestrator |
2025-10-08 15:37:42.823301 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-10-08 15:37:42.823312 | orchestrator | Wednesday 08 October 2025 15:37:42 +0000 (0:00:00.171) 0:01:14.100 *****
2025-10-08 15:37:42.823323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:37:42.823350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:37:42.823362 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:37:42.823373 | orchestrator |
2025-10-08 15:37:42.823384 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-10-08 15:37:42.823395 | orchestrator | Wednesday 08 October 2025 15:37:42 +0000 (0:00:00.164) 0:01:14.265 *****
2025-10-08 15:37:42.823406 | orchestrator | ok: [testbed-node-5] => {
2025-10-08 15:37:42.823417 | orchestrator |     "lvm_report": {
2025-10-08 15:37:42.823428 | orchestrator |         "lv": [
2025-10-08 15:37:42.823438 | orchestrator |             {
2025-10-08 15:37:42.823449 | orchestrator |                 "lv_name": "osd-block-93919d76-3b82-5996-a675-e75a55626347",
2025-10-08 15:37:42.823466 | orchestrator |                 "vg_name": "ceph-93919d76-3b82-5996-a675-e75a55626347"
2025-10-08 15:37:42.823477 | orchestrator |             },
2025-10-08 15:37:42.823488 | orchestrator |             {
2025-10-08 15:37:42.823499 | orchestrator |                 "lv_name": "osd-block-cead9db5-2c40-515a-bcee-782342d5bd60",
2025-10-08 15:37:42.823510 | orchestrator |                 "vg_name": "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"
2025-10-08 15:37:42.823521 | orchestrator |             }
2025-10-08 15:37:42.823531 | orchestrator |         ],
2025-10-08 15:37:42.823542 | orchestrator |         "pv": [
2025-10-08 15:37:42.823553 | orchestrator |             {
2025-10-08 15:37:42.823564 | orchestrator |                 "pv_name": "/dev/sdb",
2025-10-08 15:37:42.823575 | orchestrator |                 "vg_name": "ceph-93919d76-3b82-5996-a675-e75a55626347"
2025-10-08 15:37:42.823586 | orchestrator |             },
2025-10-08 15:37:42.823596 | orchestrator |             {
2025-10-08 15:37:42.823607 | orchestrator |                 "pv_name": "/dev/sdc",
2025-10-08 15:37:42.823618 | orchestrator |                 "vg_name": "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"
2025-10-08 15:37:42.823629 | orchestrator |             }
2025-10-08 15:37:42.823640 | orchestrator |         ]
2025-10-08 15:37:42.823650 | orchestrator |     }
2025-10-08 15:37:42.823661 | orchestrator | }
2025-10-08 15:37:42.823672 | orchestrator |
2025-10-08 15:37:42.823684 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:37:42.823701 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-10-08 15:37:42.823712 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-10-08 15:37:42.823723 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-10-08 15:37:42.823734 | orchestrator |
2025-10-08 15:37:42.823745 | orchestrator |
2025-10-08 15:37:42.823756 | orchestrator |
2025-10-08 15:37:42.823767 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:37:42.823778 | orchestrator | Wednesday 08 October 2025 15:37:42 +0000 (0:00:00.157) 0:01:14.422 *****
2025-10-08 15:37:42.823789 | orchestrator | ===============================================================================
2025-10-08 15:37:42.823799 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s
2025-10-08 15:37:42.823810 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s
2025-10-08 15:37:42.823821 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s
2025-10-08 15:37:42.823832 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s
2025-10-08 15:37:42.823843 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s
2025-10-08 15:37:42.823853 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2025-10-08 15:37:42.823864 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s
2025-10-08 15:37:42.823875 | orchestrator | Add known partitions to the list of available block devices ------------- 1.42s
2025-10-08 15:37:42.823892 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s
2025-10-08 15:37:43.229370 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2025-10-08 15:37:43.229464 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s
2025-10-08 15:37:43.229478 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-10-08 15:37:43.229489 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2025-10-08 15:37:43.229500 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s
2025-10-08 15:37:43.229511 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-10-08 15:37:43.229522 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.72s
2025-10-08 15:37:43.229533 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.71s
2025-10-08 15:37:43.229544 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-10-08 15:37:43.229555 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.70s
2025-10-08 15:37:43.229566 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-10-08 15:37:55.545692 | orchestrator | 2025-10-08 15:37:55 | INFO  | Task 8eaa2e3b-e903-4ea7-bfb1-7c2ee68dd677 (facts) was prepared for execution.
2025-10-08 15:37:55.545783 | orchestrator | 2025-10-08 15:37:55 | INFO  | It takes a moment until task 8eaa2e3b-e903-4ea7-bfb1-7c2ee68dd677 (facts) has been started and output is visible here.
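The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Print LVM report data" tasks above merge the JSON reports emitted by LVM's `lvs` and `pvs` commands (with `--reportformat json`, each wraps its rows in a `report[0]` object) into the single `lvm_report` structure shown in the log. A minimal sketch of that merge, assuming the standard `--reportformat json` layout and using sample values copied from the log (`combine_lvm_report` is a hypothetical helper, not part of the playbook):

```python
import json

# Sample stdout in the shape produced by `lvs -o lv_name,vg_name --reportformat json`
# and `pvs -o pv_name,vg_name --reportformat json`; values taken from the log above.
LVS_JSON = '''{"report": [{"lv": [
  {"lv_name": "osd-block-93919d76-3b82-5996-a675-e75a55626347",
   "vg_name": "ceph-93919d76-3b82-5996-a675-e75a55626347"},
  {"lv_name": "osd-block-cead9db5-2c40-515a-bcee-782342d5bd60",
   "vg_name": "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"}]}]}'''
PVS_JSON = '''{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-93919d76-3b82-5996-a675-e75a55626347"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-cead9db5-2c40-515a-bcee-782342d5bd60"}]}]}'''

def combine_lvm_report(lvs_stdout: str, pvs_stdout: str) -> dict:
    """Merge both LVM JSON reports into one dict shaped like `lvm_report` above."""
    lv = json.loads(lvs_stdout)["report"][0]["lv"]
    pv = json.loads(pvs_stdout)["report"][0]["pv"]
    return {"lvm_report": {"lv": lv, "pv": pv}}

report = combine_lvm_report(LVS_JSON, PVS_JSON)
print(json.dumps(report, indent=4))
```

With the combined structure in hand, the later "Fail if … LV defined in lvm_volumes is missing" tasks only need to check that each `data`/`data_vg` pair from `lvm_volumes` appears in the `lv` list.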
2025-10-08 15:38:07.216216 | orchestrator |
2025-10-08 15:38:07.216325 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-10-08 15:38:07.216341 | orchestrator |
2025-10-08 15:38:07.216353 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-10-08 15:38:07.216365 | orchestrator | Wednesday 08 October 2025 15:37:59 +0000 (0:00:00.240) 0:00:00.240 *****
2025-10-08 15:38:07.216377 | orchestrator | ok: [testbed-manager]
2025-10-08 15:38:07.216390 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:38:07.216425 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:38:07.216436 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:38:07.216447 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:38:07.216458 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:38:07.216469 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:38:07.216480 | orchestrator |
2025-10-08 15:38:07.216491 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-10-08 15:38:07.216502 | orchestrator | Wednesday 08 October 2025 15:38:00 +0000 (0:00:01.042) 0:00:01.282 *****
2025-10-08 15:38:07.216527 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:38:07.216539 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:38:07.216552 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:38:07.216563 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:38:07.216574 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:38:07.216585 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:38:07.216597 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:38:07.216608 | orchestrator |
2025-10-08 15:38:07.216619 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-10-08 15:38:07.216630 | orchestrator |
2025-10-08 15:38:07.216642 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-08 15:38:07.216652 | orchestrator | Wednesday 08 October 2025 15:38:01 +0000 (0:00:01.107) 0:00:02.390 *****
2025-10-08 15:38:07.216664 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:38:07.216675 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:38:07.216686 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:38:07.216697 | orchestrator | ok: [testbed-manager]
2025-10-08 15:38:07.216708 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:38:07.216719 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:38:07.216730 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:38:07.216742 | orchestrator |
2025-10-08 15:38:07.216753 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-10-08 15:38:07.216766 | orchestrator |
2025-10-08 15:38:07.216778 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-10-08 15:38:07.216791 | orchestrator | Wednesday 08 October 2025 15:38:06 +0000 (0:00:04.853) 0:00:07.244 *****
2025-10-08 15:38:07.216803 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:38:07.216816 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:38:07.216828 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:38:07.216841 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:38:07.216853 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:38:07.216866 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:38:07.216878 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:38:07.216890 | orchestrator |
2025-10-08 15:38:07.216903 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:38:07.216915 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216930 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216942 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216955 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216968 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216980 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.216993 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:38:07.217013 | orchestrator |
2025-10-08 15:38:07.217050 | orchestrator |
2025-10-08 15:38:07.217063 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:38:07.217076 | orchestrator | Wednesday 08 October 2025 15:38:06 +0000 (0:00:00.463) 0:00:07.707 *****
2025-10-08 15:38:07.217089 | orchestrator | ===============================================================================
2025-10-08 15:38:07.217101 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2025-10-08 15:38:07.217114 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s
2025-10-08 15:38:07.217126 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s
2025-10-08 15:38:07.217138 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2025-10-08 15:38:19.317013 | orchestrator | 2025-10-08 15:38:19 | INFO  | Task 7a6c2037-28e2-429e-8d3d-508c4e815e1d (frr) was prepared for execution.
2025-10-08 15:38:19.317179 | orchestrator | 2025-10-08 15:38:19 | INFO  | It takes a moment until task 7a6c2037-28e2-429e-8d3d-508c4e815e1d (frr) has been started and output is visible here.
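The facts play above first creates a custom facts directory on every host before gathering. This relies on Ansible's standard local-facts mechanism: JSON files named `*.fact` under `/etc/ansible/facts.d` are read during fact gathering and exposed under `ansible_local.<name>`. A minimal sketch of that mechanic, using a temp directory in place of `/etc/ansible/facts.d` and an invented `example.fact` payload (not from this log):

```python
import json
import tempfile
from pathlib import Path

# A temp dir stands in for /etc/ansible/facts.d on a managed host.
facts_d = Path(tempfile.mkdtemp())

# Hypothetical static fact file; name and contents are illustrative only.
fact_file = facts_d / "example.fact"
fact_file.write_text(json.dumps({"role": "storage", "osd_count": 2}))

# Roughly what fact gathering would surface as `ansible_local` for this host:
# each *.fact file becomes a key named after the file's stem.
ansible_local = {f.stem: json.loads(f.read_text()) for f in facts_d.glob("*.fact")}
print(ansible_local)
```

The "Copy fact files" task is skipped on all hosts here because no custom fact files are configured for this testbed run, so only the empty directory is ensured.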
2025-10-08 15:38:45.903358 | orchestrator |
2025-10-08 15:38:45.903417 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-10-08 15:38:45.903431 | orchestrator |
2025-10-08 15:38:45.903443 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-10-08 15:38:45.903455 | orchestrator | Wednesday 08 October 2025 15:38:23 +0000 (0:00:00.230) 0:00:00.230 *****
2025-10-08 15:38:45.903466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-10-08 15:38:45.903479 | orchestrator |
2025-10-08 15:38:45.903490 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-10-08 15:38:45.903501 | orchestrator | Wednesday 08 October 2025 15:38:23 +0000 (0:00:00.233) 0:00:00.464 *****
2025-10-08 15:38:45.903513 | orchestrator | changed: [testbed-manager]
2025-10-08 15:38:45.903525 | orchestrator |
2025-10-08 15:38:45.903536 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-10-08 15:38:45.903547 | orchestrator | Wednesday 08 October 2025 15:38:24 +0000 (0:00:01.098) 0:00:01.562 *****
2025-10-08 15:38:45.903558 | orchestrator | changed: [testbed-manager]
2025-10-08 15:38:45.903569 | orchestrator |
2025-10-08 15:38:45.903595 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-10-08 15:38:45.903606 | orchestrator | Wednesday 08 October 2025 15:38:35 +0000 (0:00:10.162) 0:00:11.725 *****
2025-10-08 15:38:45.903617 | orchestrator | ok: [testbed-manager]
2025-10-08 15:38:45.903630 | orchestrator |
2025-10-08 15:38:45.903641 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-10-08 15:38:45.903652 | orchestrator | Wednesday 08 October 2025 15:38:36 +0000 (0:00:01.064) 0:00:12.789 *****
2025-10-08 15:38:45.903662 | orchestrator | changed: [testbed-manager]
2025-10-08 15:38:45.903673 | orchestrator |
2025-10-08 15:38:45.903684 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-10-08 15:38:45.903695 | orchestrator | Wednesday 08 October 2025 15:38:37 +0000 (0:00:00.968) 0:00:13.758 *****
2025-10-08 15:38:45.903706 | orchestrator | ok: [testbed-manager]
2025-10-08 15:38:45.903717 | orchestrator |
2025-10-08 15:38:45.903728 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-10-08 15:38:45.903739 | orchestrator | Wednesday 08 October 2025 15:38:38 +0000 (0:00:01.270) 0:00:15.029 *****
2025-10-08 15:38:45.903750 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:38:45.903761 | orchestrator |
2025-10-08 15:38:45.903772 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-10-08 15:38:45.903783 | orchestrator | Wednesday 08 October 2025 15:38:39 +0000 (0:00:00.811) 0:00:15.840 *****
2025-10-08 15:38:45.903794 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:38:45.903805 | orchestrator |
2025-10-08 15:38:45.903816 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-10-08 15:38:45.903842 | orchestrator | Wednesday 08 October 2025 15:38:39 +0000 (0:00:00.156) 0:00:15.996 *****
2025-10-08 15:38:45.903853 | orchestrator | changed: [testbed-manager]
2025-10-08 15:38:45.903864 | orchestrator |
2025-10-08 15:38:45.903875 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-10-08 15:38:45.903886 | orchestrator | Wednesday 08 October 2025 15:38:40 +0000 (0:00:01.036) 0:00:17.033 *****
2025-10-08 15:38:45.903896 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-10-08 15:38:45.903907 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-10-08 15:38:45.903918 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-10-08 15:38:45.903930 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-10-08 15:38:45.903943 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-10-08 15:38:45.903956 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-10-08 15:38:45.903968 | orchestrator |
2025-10-08 15:38:45.903980 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-10-08 15:38:45.903992 | orchestrator | Wednesday 08 October 2025 15:38:42 +0000 (0:00:02.259) 0:00:19.292 *****
2025-10-08 15:38:45.904004 | orchestrator | ok: [testbed-manager]
2025-10-08 15:38:45.904016 | orchestrator |
2025-10-08 15:38:45.904079 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-10-08 15:38:45.904092 | orchestrator | Wednesday 08 October 2025 15:38:44 +0000 (0:00:01.638) 0:00:20.931 *****
2025-10-08 15:38:45.904104 | orchestrator | changed: [testbed-manager]
2025-10-08 15:38:45.904116 | orchestrator |
2025-10-08 15:38:45.904128 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:38:45.904140 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-08 15:38:45.904152 | orchestrator |
2025-10-08 15:38:45.904164 | orchestrator |
2025-10-08 15:38:45.904176 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:38:45.904189 | orchestrator | Wednesday 08 October 2025 15:38:45 +0000 (0:00:01.358) 0:00:22.290 *****
2025-10-08 15:38:45.904200 | orchestrator | ===============================================================================
2025-10-08 15:38:45.904213 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.16s
2025-10-08 15:38:45.904224 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.26s
2025-10-08 15:38:45.904236 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.64s
2025-10-08 15:38:45.904248 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s
2025-10-08 15:38:45.904275 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.27s
2025-10-08 15:38:45.904288 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.10s
2025-10-08 15:38:45.904299 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s
2025-10-08 15:38:45.904310 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.04s
2025-10-08 15:38:45.904320 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.97s
2025-10-08 15:38:45.904331 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.81s
2025-10-08 15:38:45.904342 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-10-08 15:38:45.904352 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-10-08 15:38:46.138235 | orchestrator |
2025-10-08 15:38:46.141577 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Oct 8 15:38:46 UTC 2025
2025-10-08 15:38:46.141630 | orchestrator |
2025-10-08 15:38:47.868365 | orchestrator | 2025-10-08 15:38:47 | INFO  | Collection nutshell is prepared for execution
2025-10-08 15:38:47.868436 | orchestrator | 2025-10-08
15:38:47 | INFO  | D [0] - dotfiles
2025-10-08 15:38:57.892000 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [0] - homer
2025-10-08 15:38:57.892162 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [0] - netdata
2025-10-08 15:38:57.892179 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [0] - openstackclient
2025-10-08 15:38:57.892191 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [0] - phpmyadmin
2025-10-08 15:38:57.892816 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [0] - common
2025-10-08 15:38:57.897608 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [1] -- loadbalancer
2025-10-08 15:38:57.897642 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [2] --- opensearch
2025-10-08 15:38:57.897655 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [2] --- mariadb-ng
2025-10-08 15:38:57.897948 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [3] ---- horizon
2025-10-08 15:38:57.898310 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [3] ---- keystone
2025-10-08 15:38:57.898515 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [4] ----- neutron
2025-10-08 15:38:57.898858 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [5] ------ wait-for-nova
2025-10-08 15:38:57.899157 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [6] ------- octavia
2025-10-08 15:38:57.900988 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- barbican
2025-10-08 15:38:57.901057 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- designate
2025-10-08 15:38:57.901397 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- ironic
2025-10-08 15:38:57.901422 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- placement
2025-10-08 15:38:57.901766 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- magnum
2025-10-08 15:38:57.902522 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [1] -- openvswitch
2025-10-08 15:38:57.902845 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [2] --- ovn
2025-10-08 15:38:57.903162 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [1] -- memcached
2025-10-08 15:38:57.903338 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [1] -- redis
2025-10-08 15:38:57.903641 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [1] -- rabbitmq-ng
2025-10-08 15:38:57.904014 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [0] - kubernetes
2025-10-08 15:38:57.906844 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [1] -- kubeconfig
2025-10-08 15:38:57.906879 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [1] -- copy-kubeconfig
2025-10-08 15:38:57.907007 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [0] - ceph
2025-10-08 15:38:57.909742 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [1] -- ceph-pools
2025-10-08 15:38:57.909777 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [2] --- copy-ceph-keys
2025-10-08 15:38:57.909789 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [3] ---- cephclient
2025-10-08 15:38:57.910217 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-10-08 15:38:57.910246 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [4] ----- wait-for-keystone
2025-10-08 15:38:57.910262 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [5] ------ kolla-ceph-rgw
2025-10-08 15:38:57.910689 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [5] ------ glance
2025-10-08 15:38:57.910717 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [5] ------ cinder
2025-10-08 15:38:57.910932 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [5] ------ nova
2025-10-08 15:38:57.911650 | orchestrator | 2025-10-08 15:38:57 | INFO  | A [4] ----- prometheus
2025-10-08 15:38:57.911677 | orchestrator | 2025-10-08 15:38:57 | INFO  | D [5] ------ grafana
2025-10-08 15:38:58.127997 | orchestrator | 2025-10-08 15:38:58 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-10-08 15:38:58.128107 | orchestrator | 2025-10-08 15:38:58 | INFO  | Tasks are running in the background
2025-10-08 15:39:00.948558 | orchestrator | 2025-10-08 15:39:00 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-10-08 15:39:03.057545 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED
2025-10-08 15:39:03.058710 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:39:03.060214 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:39:03.060689 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:39:03.061295 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED
2025-10-08 15:39:03.064335 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED
2025-10-08 15:39:03.064768 | orchestrator | 2025-10-08 15:39:03 | INFO  | Task 087ca851-291e-4b40-8718-7364f936cdb7 is in state STARTED
2025-10-08 15:39:03.064795 | orchestrator | 2025-10-08 15:39:03 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:39:25.069067 | orchestrator |
2025-10-08 15:39:25.069156 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-10-08 15:39:25.069167 | orchestrator |
2025-10-08 15:39:25.069176 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-10-08 15:39:25.069184 | orchestrator | Wednesday 08 October 2025 15:39:11 +0000 (0:00:00.595) 0:00:00.595 *****
2025-10-08 15:39:25.069193 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:39:25.069202 | orchestrator | changed: [testbed-manager]
2025-10-08 15:39:25.069210 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:39:25.069218 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:39:25.069225 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:39:25.069233 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:39:25.069241 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:39:25.069249 | orchestrator |
2025-10-08 15:39:25.069257 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-10-08 15:39:25.069265 | orchestrator | Wednesday 08 October 2025 15:39:15 +0000 (0:00:04.405) 0:00:05.000 *****
2025-10-08 15:39:25.069274 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-10-08 15:39:25.069282 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-10-08 15:39:25.069290 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-10-08 15:39:25.069298 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-10-08 15:39:25.069306 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-10-08 15:39:25.069314 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-10-08 15:39:25.069321 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-10-08 15:39:25.069329 | orchestrator |
2025-10-08 15:39:25.069337 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-10-08 15:39:25.069346 | orchestrator | Wednesday 08 October 2025 15:39:16 +0000 (0:00:01.572) 0:00:06.573 ***** 2025-10-08 15:39:25.069365 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.547177', 'end': '2025-10-08 15:39:16.655790', 'delta': '0:00:00.108613', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069381 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.510368', 'end': '2025-10-08 15:39:16.518899', 'delta': '0:00:00.008531', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069409 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.516064', 'end': '2025-10-08 15:39:16.525421', 'delta': '0:00:00.009357', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069440 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.568689', 'end': '2025-10-08 15:39:16.575061', 'delta': '0:00:00.006372', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069449 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.589695', 'end': '2025-10-08 15:39:16.600482', 'delta': '0:00:00.010787', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069458 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.562019', 'end': '2025-10-08 15:39:16.569029', 'delta': '0:00:00.007010', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069471 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-08 15:39:16.711528', 'end': '2025-10-08 15:39:16.721760', 'delta': '0:00:00.010232', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-08 15:39:25.069492 | orchestrator | 2025-10-08 15:39:25.069501 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-10-08 15:39:25.069509 | orchestrator | Wednesday 08 October 2025 15:39:18 +0000 (0:00:02.002) 0:00:08.576 ***** 2025-10-08 15:39:25.069517 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-10-08 15:39:25.069525 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-10-08 15:39:25.069534 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-10-08 15:39:25.069542 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-10-08 15:39:25.069550 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-10-08 15:39:25.069558 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-10-08 15:39:25.069565 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-10-08 15:39:25.069574 | orchestrator | 2025-10-08 15:39:25.069582 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-10-08 15:39:25.069590 | orchestrator | Wednesday 08 October 2025 15:39:21 +0000 (0:00:02.330) 0:00:10.907 ***** 2025-10-08 15:39:25.069598 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-10-08 15:39:25.069606 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-10-08 15:39:25.069614 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-10-08 15:39:25.069622 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-10-08 15:39:25.069630 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-10-08 15:39:25.069637 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-10-08 15:39:25.069646 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-10-08 15:39:25.069653 | orchestrator | 2025-10-08 15:39:25.069661 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:39:25.069676 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069686 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069694 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069702 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069710 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069717 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069725 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:39:25.069733 | orchestrator | 2025-10-08 15:39:25.069741 | orchestrator | 2025-10-08 15:39:25.069749 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-10-08 15:39:25.069757 | orchestrator | Wednesday 08 October 2025 15:39:23 +0000 (0:00:02.201) 0:00:13.108 ***** 2025-10-08 15:39:25.069765 | orchestrator | =============================================================================== 2025-10-08 15:39:25.069773 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s 2025-10-08 15:39:25.069781 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.33s 2025-10-08 15:39:25.069794 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.20s 2025-10-08 15:39:25.069803 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.00s 2025-10-08 15:39:25.069810 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.57s 2025-10-08 15:39:25.069818 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:25.069827 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:25.069835 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:25.069843 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:25.069851 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:25.069862 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:25.069870 | orchestrator | 2025-10-08 15:39:24 | INFO  | Task 087ca851-291e-4b40-8718-7364f936cdb7 is in state SUCCESS 2025-10-08 15:39:25.069879 | orchestrator | 2025-10-08 15:39:24 | INFO  | Wait 1 second(s) 
until the next check 2025-10-08 15:39:28.058320 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:28.058424 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:28.060782 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:28.065280 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:28.065377 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:28.067662 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:28.072118 | orchestrator | 2025-10-08 15:39:28 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:28.072170 | orchestrator | 2025-10-08 15:39:28 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:31.112546 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:31.112712 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:31.113240 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:31.113751 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:31.114291 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:31.114787 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:31.115317 | orchestrator | 2025-10-08 15:39:31 | INFO  | Task 
240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:31.115337 | orchestrator | 2025-10-08 15:39:31 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:34.199446 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:34.199703 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:34.202093 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:34.202750 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:34.203312 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:34.205142 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:34.206425 | orchestrator | 2025-10-08 15:39:34 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:34.206450 | orchestrator | 2025-10-08 15:39:34 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:37.274989 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:37.277130 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:37.277816 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:37.279446 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:37.280527 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:37.281808 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task 
52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:37.282692 | orchestrator | 2025-10-08 15:39:37 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:37.282722 | orchestrator | 2025-10-08 15:39:37 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:40.337158 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:40.337511 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:40.338539 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:40.340229 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:40.343878 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:40.345332 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:40.347711 | orchestrator | 2025-10-08 15:39:40 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:40.347740 | orchestrator | 2025-10-08 15:39:40 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:43.375612 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:43.377930 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:43.378733 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:43.380107 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:43.380546 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task 
5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:43.381487 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:43.383013 | orchestrator | 2025-10-08 15:39:43 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:43.383290 | orchestrator | 2025-10-08 15:39:43 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:46.603159 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:46.603283 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:46.603311 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:46.603333 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:46.603352 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:46.603371 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:46.603390 | orchestrator | 2025-10-08 15:39:46 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:46.603410 | orchestrator | 2025-10-08 15:39:46 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:49.547937 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state STARTED 2025-10-08 15:39:49.548078 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:49.548094 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:49.548106 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task 
8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:49.548437 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:49.548451 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:49.548463 | orchestrator | 2025-10-08 15:39:49 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:49.548475 | orchestrator | 2025-10-08 15:39:49 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:52.563637 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task cbd441a0-c07e-44e0-bbfb-fdfc76610d6d is in state SUCCESS 2025-10-08 15:39:52.563914 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:52.565529 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:52.565557 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:52.566386 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:52.567385 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:52.574002 | orchestrator | 2025-10-08 15:39:52 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:52.574140 | orchestrator | 2025-10-08 15:39:52 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:55.693077 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:55.693284 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:55.694267 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task 
8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:55.695168 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:55.695583 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state STARTED 2025-10-08 15:39:55.696641 | orchestrator | 2025-10-08 15:39:55 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:55.696679 | orchestrator | 2025-10-08 15:39:55 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:39:58.880103 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:39:58.881081 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:39:58.884752 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:39:58.886174 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:39:58.886786 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task 52d535b8-c342-437c-b4a2-fb3818784700 is in state SUCCESS 2025-10-08 15:39:58.893346 | orchestrator | 2025-10-08 15:39:58 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:39:58.893374 | orchestrator | 2025-10-08 15:39:58 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:01.937314 | orchestrator | 2025-10-08 15:40:01 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:01.938585 | orchestrator | 2025-10-08 15:40:01 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:01.940466 | orchestrator | 2025-10-08 15:40:01 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:01.942119 | orchestrator | 2025-10-08 15:40:01 | INFO  | Task 
5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:01.943600 | orchestrator | 2025-10-08 15:40:01 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:01.943768 | orchestrator | 2025-10-08 15:40:01 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:05.100208 | orchestrator | 2025-10-08 15:40:05 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:05.101043 | orchestrator | 2025-10-08 15:40:05 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:05.101964 | orchestrator | 2025-10-08 15:40:05 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:05.103148 | orchestrator | 2025-10-08 15:40:05 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:05.104474 | orchestrator | 2025-10-08 15:40:05 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:05.104645 | orchestrator | 2025-10-08 15:40:05 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:08.276597 | orchestrator | 2025-10-08 15:40:08 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:08.278923 | orchestrator | 2025-10-08 15:40:08 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:08.288168 | orchestrator | 2025-10-08 15:40:08 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:08.291707 | orchestrator | 2025-10-08 15:40:08 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:08.294948 | orchestrator | 2025-10-08 15:40:08 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:08.295989 | orchestrator | 2025-10-08 15:40:08 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:11.382138 | orchestrator | 2025-10-08 15:40:11 | INFO  | Task 
a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:11.384461 | orchestrator | 2025-10-08 15:40:11 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:11.385943 | orchestrator | 2025-10-08 15:40:11 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:11.389623 | orchestrator | 2025-10-08 15:40:11 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:11.392324 | orchestrator | 2025-10-08 15:40:11 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:11.392353 | orchestrator | 2025-10-08 15:40:11 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:14.456796 | orchestrator | 2025-10-08 15:40:14 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:14.456907 | orchestrator | 2025-10-08 15:40:14 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:14.456930 | orchestrator | 2025-10-08 15:40:14 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:14.456974 | orchestrator | 2025-10-08 15:40:14 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:14.457547 | orchestrator | 2025-10-08 15:40:14 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:14.457592 | orchestrator | 2025-10-08 15:40:14 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:17.501538 | orchestrator | 2025-10-08 15:40:17 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:17.504616 | orchestrator | 2025-10-08 15:40:17 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:17.505675 | orchestrator | 2025-10-08 15:40:17 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:17.512115 | orchestrator | 2025-10-08 15:40:17 | INFO  | Task 
5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:17.518692 | orchestrator | 2025-10-08 15:40:17 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:17.518723 | orchestrator | 2025-10-08 15:40:17 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:20.580627 | orchestrator | 2025-10-08 15:40:20 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:20.580715 | orchestrator | 2025-10-08 15:40:20 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:20.580727 | orchestrator | 2025-10-08 15:40:20 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:20.580738 | orchestrator | 2025-10-08 15:40:20 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:20.581130 | orchestrator | 2025-10-08 15:40:20 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:20.581151 | orchestrator | 2025-10-08 15:40:20 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:23.611807 | orchestrator | 2025-10-08 15:40:23 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:23.612376 | orchestrator | 2025-10-08 15:40:23 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:23.613789 | orchestrator | 2025-10-08 15:40:23 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:23.614972 | orchestrator | 2025-10-08 15:40:23 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state STARTED 2025-10-08 15:40:23.617389 | orchestrator | 2025-10-08 15:40:23 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state STARTED 2025-10-08 15:40:23.617413 | orchestrator | 2025-10-08 15:40:23 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:26.659817 | orchestrator | 2025-10-08 15:40:26 | INFO  | Task 
a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:40:26.660169 | orchestrator | 2025-10-08 15:40:26 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:40:26.660282 | orchestrator | 2025-10-08 15:40:26 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED 2025-10-08 15:40:26.661132 | orchestrator | 2025-10-08 15:40:26 | INFO  | Task 5ae649af-e6a9-45af-84eb-af02680e5c84 is in state SUCCESS 2025-10-08 15:40:26.661364 | orchestrator | 2025-10-08 15:40:26 | INFO  | Task 240b2620-3980-40c2-8172-24ae2bc12e13 is in state SUCCESS 2025-10-08 15:40:26.661486 | orchestrator | 2025-10-08 15:40:26 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:40:26.662867 | orchestrator | 2025-10-08 15:40:26.662913 | orchestrator | 2025-10-08 15:40:26.662926 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-10-08 15:40:26.662938 | orchestrator | 2025-10-08 15:40:26.662949 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-10-08 15:40:26.662961 | orchestrator | Wednesday 08 October 2025 15:39:11 +0000 (0:00:00.903) 0:00:00.903 ***** 2025-10-08 15:40:26.662973 | orchestrator | ok: [testbed-manager] => { 2025-10-08 15:40:26.662986 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-10-08 15:40:26.662999 | orchestrator | } 2025-10-08 15:40:26.663010 | orchestrator | 2025-10-08 15:40:26.663022 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-10-08 15:40:26.663059 | orchestrator | Wednesday 08 October 2025 15:39:12 +0000 (0:00:00.720) 0:00:01.624 ***** 2025-10-08 15:40:26.663070 | orchestrator | ok: [testbed-manager] 2025-10-08 15:40:26.663083 | orchestrator | 2025-10-08 15:40:26.663094 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-10-08 15:40:26.663104 | orchestrator | Wednesday 08 October 2025 15:39:13 +0000 (0:00:01.522) 0:00:03.146 ***** 2025-10-08 15:40:26.663116 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-10-08 15:40:26.663127 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-10-08 15:40:26.663138 | orchestrator | 2025-10-08 15:40:26.663149 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-10-08 15:40:26.663189 | orchestrator | Wednesday 08 October 2025 15:39:14 +0000 (0:00:01.160) 0:00:04.307 ***** 2025-10-08 15:40:26.663202 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663213 | orchestrator | 2025-10-08 15:40:26.663224 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-10-08 15:40:26.663235 | orchestrator | Wednesday 08 October 2025 15:39:17 +0000 (0:00:02.581) 0:00:06.888 ***** 2025-10-08 15:40:26.663246 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663257 | orchestrator | 2025-10-08 15:40:26.663268 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-10-08 15:40:26.663279 | orchestrator | Wednesday 08 October 2025 15:39:19 +0000 (0:00:01.634) 0:00:08.522 ***** 2025-10-08 15:40:26.663290 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-10-08 15:40:26.663301 | orchestrator | ok: [testbed-manager] 2025-10-08 15:40:26.663312 | orchestrator | 2025-10-08 15:40:26.663323 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-10-08 15:40:26.663334 | orchestrator | Wednesday 08 October 2025 15:39:48 +0000 (0:00:29.069) 0:00:37.591 ***** 2025-10-08 15:40:26.663363 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663375 | orchestrator | 2025-10-08 15:40:26.663385 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:40:26.663396 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:40:26.663409 | orchestrator | 2025-10-08 15:40:26.663420 | orchestrator | 2025-10-08 15:40:26.663431 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:40:26.663443 | orchestrator | Wednesday 08 October 2025 15:39:52 +0000 (0:00:03.985) 0:00:41.577 ***** 2025-10-08 15:40:26.663455 | orchestrator | =============================================================================== 2025-10-08 15:40:26.663467 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.07s 2025-10-08 15:40:26.663478 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.99s 2025-10-08 15:40:26.663490 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.58s 2025-10-08 15:40:26.663502 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.63s 2025-10-08 15:40:26.663514 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.52s 2025-10-08 15:40:26.663526 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.16s 2025-10-08 15:40:26.663537 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.72s 2025-10-08 15:40:26.663549 | orchestrator | 2025-10-08 15:40:26.663561 | orchestrator | 2025-10-08 15:40:26.663574 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-10-08 15:40:26.663586 | orchestrator | 2025-10-08 15:40:26.663598 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-10-08 15:40:26.663610 | orchestrator | Wednesday 08 October 2025 15:39:11 +0000 (0:00:01.190) 0:00:01.190 ***** 2025-10-08 15:40:26.663622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-10-08 15:40:26.663636 | orchestrator | 2025-10-08 15:40:26.663647 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-10-08 15:40:26.663660 | orchestrator | Wednesday 08 October 2025 15:39:12 +0000 (0:00:00.428) 0:00:01.618 ***** 2025-10-08 15:40:26.663672 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-10-08 15:40:26.663684 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-10-08 15:40:26.663696 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-10-08 15:40:26.663708 | orchestrator | 2025-10-08 15:40:26.663720 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-10-08 15:40:26.663732 | orchestrator | Wednesday 08 October 2025 15:39:14 +0000 (0:00:02.008) 0:00:03.627 ***** 2025-10-08 15:40:26.663744 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663756 | orchestrator | 2025-10-08 15:40:26.663768 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-10-08 15:40:26.663780 | orchestrator | Wednesday 08 October 2025 15:39:17 +0000 (0:00:03.319) 
0:00:06.946 ***** 2025-10-08 15:40:26.663803 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-10-08 15:40:26.663815 | orchestrator | ok: [testbed-manager] 2025-10-08 15:40:26.663826 | orchestrator | 2025-10-08 15:40:26.663837 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-10-08 15:40:26.663848 | orchestrator | Wednesday 08 October 2025 15:39:48 +0000 (0:00:30.660) 0:00:37.607 ***** 2025-10-08 15:40:26.663858 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663869 | orchestrator | 2025-10-08 15:40:26.663880 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-10-08 15:40:26.663891 | orchestrator | Wednesday 08 October 2025 15:39:50 +0000 (0:00:02.052) 0:00:39.659 ***** 2025-10-08 15:40:26.663907 | orchestrator | ok: [testbed-manager] 2025-10-08 15:40:26.663918 | orchestrator | 2025-10-08 15:40:26.663929 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-10-08 15:40:26.663940 | orchestrator | Wednesday 08 October 2025 15:39:51 +0000 (0:00:00.724) 0:00:40.383 ***** 2025-10-08 15:40:26.663950 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.663961 | orchestrator | 2025-10-08 15:40:26.663972 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-10-08 15:40:26.663982 | orchestrator | Wednesday 08 October 2025 15:39:53 +0000 (0:00:02.342) 0:00:42.726 ***** 2025-10-08 15:40:26.663993 | orchestrator | changed: [testbed-manager] 2025-10-08 15:40:26.664004 | orchestrator | 2025-10-08 15:40:26.664015 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-10-08 15:40:26.664052 | orchestrator | Wednesday 08 October 2025 15:39:55 +0000 (0:00:01.685) 0:00:44.411 ***** 2025-10-08 15:40:26.664063 | orchestrator | changed: 
[testbed-manager] 2025-10-08 15:40:26.664074 | orchestrator | 2025-10-08 15:40:26.664085 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-10-08 15:40:26.664096 | orchestrator | Wednesday 08 October 2025 15:39:56 +0000 (0:00:00.824) 0:00:45.236 ***** 2025-10-08 15:40:26.664107 | orchestrator | ok: [testbed-manager] 2025-10-08 15:40:26.664118 | orchestrator | 2025-10-08 15:40:26.664128 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:40:26.664139 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:40:26.664150 | orchestrator | 2025-10-08 15:40:26.664161 | orchestrator | 2025-10-08 15:40:26.664172 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:40:26.664183 | orchestrator | Wednesday 08 October 2025 15:39:56 +0000 (0:00:00.579) 0:00:45.815 ***** 2025-10-08 15:40:26.664193 | orchestrator | =============================================================================== 2025-10-08 15:40:26.664204 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 30.66s 2025-10-08 15:40:26.664215 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.32s 2025-10-08 15:40:26.664225 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.34s 2025-10-08 15:40:26.664236 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.05s 2025-10-08 15:40:26.664247 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.01s 2025-10-08 15:40:26.664258 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.69s 2025-10-08 15:40:26.664268 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.82s 
2025-10-08 15:40:26.664279 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.72s
2025-10-08 15:40:26.664290 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.58s
2025-10-08 15:40:26.664301 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.43s
2025-10-08 15:40:26.664311 | orchestrator |
2025-10-08 15:40:26.664322 | orchestrator |
2025-10-08 15:40:26.664333 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:40:26.664344 | orchestrator |
2025-10-08 15:40:26.664355 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:40:26.664365 | orchestrator | Wednesday 08 October 2025 15:39:12 +0000 (0:00:00.891) 0:00:00.891 *****
2025-10-08 15:40:26.664376 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-10-08 15:40:26.664387 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-10-08 15:40:26.664397 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-10-08 15:40:26.664408 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-10-08 15:40:26.664419 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-10-08 15:40:26.664435 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-10-08 15:40:26.664446 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-10-08 15:40:26.664457 | orchestrator |
2025-10-08 15:40:26.664468 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-10-08 15:40:26.664478 | orchestrator |
2025-10-08 15:40:26.664489 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-10-08 15:40:26.664500 | orchestrator | Wednesday 08 October 2025 15:39:13 +0000 (0:00:00.828) 0:00:01.719 *****
2025-10-08 15:40:26.664524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:40:26.664543 | orchestrator |
2025-10-08 15:40:26.664554 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-10-08 15:40:26.664565 | orchestrator | Wednesday 08 October 2025 15:39:15 +0000 (0:00:02.139) 0:00:03.858 *****
2025-10-08 15:40:26.664576 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:40:26.664586 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:40:26.664597 | orchestrator | ok: [testbed-manager]
2025-10-08 15:40:26.664608 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:40:26.664619 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:40:26.664635 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:40:26.664646 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:40:26.664657 | orchestrator |
2025-10-08 15:40:26.664668 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-10-08 15:40:26.664679 | orchestrator | Wednesday 08 October 2025 15:39:18 +0000 (0:00:02.900) 0:00:06.759 *****
2025-10-08 15:40:26.664690 | orchestrator | ok: [testbed-manager]
2025-10-08 15:40:26.664701 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:40:26.664712 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:40:26.664723 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:40:26.664733 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:40:26.664744 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:40:26.664754 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:40:26.664765 | orchestrator |
2025-10-08 15:40:26.664776 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-10-08 15:40:26.664787 | orchestrator | Wednesday 08 October 2025 15:39:22 +0000 (0:00:03.325) 0:00:10.084 *****
2025-10-08 15:40:26.664797 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:40:26.664808 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:40:26.664819 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:40:26.664830 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:40:26.664840 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:40:26.664851 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:40:26.664862 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.664872 | orchestrator |
2025-10-08 15:40:26.664883 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-10-08 15:40:26.664898 | orchestrator | Wednesday 08 October 2025 15:39:24 +0000 (0:00:02.717) 0:00:12.802 *****
2025-10-08 15:40:26.664910 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:40:26.664920 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:40:26.664931 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:40:26.664942 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:40:26.664953 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.664963 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:40:26.664974 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:40:26.664985 | orchestrator |
2025-10-08 15:40:26.664996 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-10-08 15:40:26.665007 | orchestrator | Wednesday 08 October 2025 15:39:35 +0000 (0:00:10.763) 0:00:23.566 *****
2025-10-08 15:40:26.665017 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:40:26.665081 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:40:26.665094 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:40:26.665105 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:40:26.665122 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:40:26.665133 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:40:26.665144 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.665154 | orchestrator |
2025-10-08 15:40:26.665166 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-10-08 15:40:26.665175 | orchestrator | Wednesday 08 October 2025 15:40:00 +0000 (0:00:24.831) 0:00:48.397 *****
2025-10-08 15:40:26.665186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:40:26.665198 | orchestrator |
2025-10-08 15:40:26.665207 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-10-08 15:40:26.665217 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:01.604) 0:00:50.002 *****
2025-10-08 15:40:26.665226 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-10-08 15:40:26.665237 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-10-08 15:40:26.665246 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-10-08 15:40:26.665256 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-10-08 15:40:26.665265 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-10-08 15:40:26.665275 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-10-08 15:40:26.665284 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-10-08 15:40:26.665294 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-10-08 15:40:26.665303 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-10-08 15:40:26.665313 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-10-08 15:40:26.665322 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-10-08 15:40:26.665332 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-10-08 15:40:26.665341 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-10-08 15:40:26.665351 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-10-08 15:40:26.665360 | orchestrator |
2025-10-08 15:40:26.665370 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-10-08 15:40:26.665380 | orchestrator | Wednesday 08 October 2025 15:40:07 +0000 (0:00:05.753) 0:00:55.756 *****
2025-10-08 15:40:26.665389 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:40:26.665399 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:40:26.665408 | orchestrator | ok: [testbed-manager]
2025-10-08 15:40:26.665418 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:40:26.665427 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:40:26.665437 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:40:26.665446 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:40:26.665456 | orchestrator |
2025-10-08 15:40:26.665465 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-10-08 15:40:26.665475 | orchestrator | Wednesday 08 October 2025 15:40:09 +0000 (0:00:01.997) 0:00:57.753 *****
2025-10-08 15:40:26.665485 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.665494 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:40:26.665504 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:40:26.665514 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:40:26.665523 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:40:26.665533 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:40:26.665542 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:40:26.665551 | orchestrator |
2025-10-08 15:40:26.665561 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-10-08 15:40:26.665577 | orchestrator | Wednesday 08 October 2025 15:40:11 +0000 (0:00:01.698) 0:00:59.452 *****
2025-10-08 15:40:26.665587 | orchestrator | ok: [testbed-manager]
2025-10-08 15:40:26.665597 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:40:26.665607 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:40:26.665622 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:40:26.665632 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:40:26.665641 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:40:26.665651 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:40:26.665660 | orchestrator |
2025-10-08 15:40:26.665670 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-10-08 15:40:26.665679 | orchestrator | Wednesday 08 October 2025 15:40:13 +0000 (0:00:01.856) 0:01:01.309 *****
2025-10-08 15:40:26.665689 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:40:26.665699 | orchestrator | ok: [testbed-manager]
2025-10-08 15:40:26.665708 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:40:26.665717 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:40:26.665727 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:40:26.665737 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:40:26.665746 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:40:26.665755 | orchestrator |
2025-10-08 15:40:26.665765 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-10-08 15:40:26.665775 | orchestrator | Wednesday 08 October 2025 15:40:15 +0000 (0:00:02.510) 0:01:03.819 *****
2025-10-08 15:40:26.665784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-10-08 15:40:26.665796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:40:26.665806 | orchestrator |
2025-10-08 15:40:26.665816 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-10-08 15:40:26.665826 | orchestrator | Wednesday 08 October 2025 15:40:17 +0000 (0:00:01.766) 0:01:05.586 *****
2025-10-08 15:40:26.665835 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.665845 | orchestrator |
2025-10-08 15:40:26.665855 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-10-08 15:40:26.665865 | orchestrator | Wednesday 08 October 2025 15:40:19 +0000 (0:00:02.339) 0:01:07.926 *****
2025-10-08 15:40:26.665874 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:40:26.665884 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:40:26.665894 | orchestrator | changed: [testbed-manager]
2025-10-08 15:40:26.665903 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:40:26.665913 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:40:26.665923 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:40:26.665932 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:40:26.665942 | orchestrator |
2025-10-08 15:40:26.665951 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:40:26.665961 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.665971 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.665981 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.665991 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.666001 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.666096 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.666111 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:40:26.666121 | orchestrator |
2025-10-08 15:40:26.666131 | orchestrator |
2025-10-08 15:40:26.666141 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:40:26.666162 | orchestrator | Wednesday 08 October 2025 15:40:23 +0000 (0:00:03.755) 0:01:11.681 *****
2025-10-08 15:40:26.666171 | orchestrator | ===============================================================================
2025-10-08 15:40:26.666181 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.83s
2025-10-08 15:40:26.666190 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.76s
2025-10-08 15:40:26.666200 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.75s
2025-10-08 15:40:26.666210 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.76s
2025-10-08 15:40:26.666219 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.33s
2025-10-08 15:40:26.666228 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.90s
2025-10-08 15:40:26.666238 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.72s
2025-10-08 15:40:26.666248 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.51s
2025-10-08 15:40:26.666257 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.34s
2025-10-08 15:40:26.666267 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.14s
2025-10-08 15:40:26.666276 | orchestrator |
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.00s
2025-10-08 15:40:26.666292 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.86s
2025-10-08 15:40:26.666302 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.77s
2025-10-08 15:40:26.666312 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.70s
2025-10-08 15:40:26.666321 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.60s
2025-10-08 15:40:26.666331 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2025-10-08 15:40:29.710857 | orchestrator | 2025-10-08 15:40:29 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:29.712170 | orchestrator | 2025-10-08 15:40:29 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:29.713310 | orchestrator | 2025-10-08 15:40:29 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:29.713344 | orchestrator | 2025-10-08 15:40:29 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:32.762305 | orchestrator | 2025-10-08 15:40:32 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:32.762424 | orchestrator | 2025-10-08 15:40:32 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:32.763060 | orchestrator | 2025-10-08 15:40:32 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:32.763087 | orchestrator | 2025-10-08 15:40:32 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:35.815471 | orchestrator | 2025-10-08 15:40:35 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:35.817273 | orchestrator | 2025-10-08 15:40:35 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:35.819566 | orchestrator | 2025-10-08 15:40:35 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:35.819842 | orchestrator | 2025-10-08 15:40:35 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:38.861651 | orchestrator | 2025-10-08 15:40:38 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:38.862333 | orchestrator | 2025-10-08 15:40:38 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:38.863338 | orchestrator | 2025-10-08 15:40:38 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:38.863362 | orchestrator | 2025-10-08 15:40:38 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:41.905444 | orchestrator | 2025-10-08 15:40:41 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:41.908000 | orchestrator | 2025-10-08 15:40:41 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:41.916764 | orchestrator | 2025-10-08 15:40:41 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:41.916793 | orchestrator | 2025-10-08 15:40:41 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:44.977653 | orchestrator | 2025-10-08 15:40:44 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:44.981387 | orchestrator | 2025-10-08 15:40:44 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:44.986126 | orchestrator | 2025-10-08 15:40:44 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:44.986167 | orchestrator | 2025-10-08 15:40:44 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:48.023946 | orchestrator | 2025-10-08 15:40:48 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:48.024186 | orchestrator | 2025-10-08 15:40:48 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:48.027910 | orchestrator | 2025-10-08 15:40:48 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:48.028753 | orchestrator | 2025-10-08 15:40:48 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:51.068292 | orchestrator | 2025-10-08 15:40:51 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:51.068947 | orchestrator | 2025-10-08 15:40:51 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:51.069659 | orchestrator | 2025-10-08 15:40:51 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:51.069686 | orchestrator | 2025-10-08 15:40:51 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:54.107419 | orchestrator | 2025-10-08 15:40:54 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:54.108625 | orchestrator | 2025-10-08 15:40:54 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:54.109940 | orchestrator | 2025-10-08 15:40:54 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:54.109969 | orchestrator | 2025-10-08 15:40:54 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:40:57.153787 | orchestrator | 2025-10-08 15:40:57 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:40:57.159589 | orchestrator | 2025-10-08 15:40:57 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:40:57.160224 | orchestrator | 2025-10-08 15:40:57 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:40:57.160463 | orchestrator | 2025-10-08 15:40:57 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:00.197252 | orchestrator | 2025-10-08 15:41:00 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:00.199389 | orchestrator | 2025-10-08 15:41:00 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:00.200744 | orchestrator | 2025-10-08 15:41:00 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:00.200798 | orchestrator | 2025-10-08 15:41:00 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:03.247294 | orchestrator | 2025-10-08 15:41:03 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:03.249409 | orchestrator | 2025-10-08 15:41:03 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:03.251977 | orchestrator | 2025-10-08 15:41:03 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:03.252007 | orchestrator | 2025-10-08 15:41:03 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:06.296219 | orchestrator | 2025-10-08 15:41:06 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:06.298270 | orchestrator | 2025-10-08 15:41:06 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:06.300698 | orchestrator | 2025-10-08 15:41:06 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:06.300761 | orchestrator | 2025-10-08 15:41:06 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:09.337197 | orchestrator | 2025-10-08 15:41:09 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:09.338466 | orchestrator | 2025-10-08 15:41:09 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:09.339626 | orchestrator | 2025-10-08 15:41:09 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:09.339648 | orchestrator | 2025-10-08 15:41:09 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:12.397379 | orchestrator | 2025-10-08 15:41:12 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:12.398394 | orchestrator | 2025-10-08 15:41:12 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:12.399378 | orchestrator | 2025-10-08 15:41:12 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:12.399493 | orchestrator | 2025-10-08 15:41:12 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:15.436416 | orchestrator | 2025-10-08 15:41:15 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:15.438191 | orchestrator | 2025-10-08 15:41:15 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:15.439008 | orchestrator | 2025-10-08 15:41:15 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:15.439029 | orchestrator | 2025-10-08 15:41:15 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:18.483769 | orchestrator | 2025-10-08 15:41:18 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:18.484511 | orchestrator | 2025-10-08 15:41:18 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:18.486373 | orchestrator | 2025-10-08 15:41:18 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state STARTED
2025-10-08 15:41:18.486400 | orchestrator | 2025-10-08 15:41:18 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:21.526699 | orchestrator | 2025-10-08 15:41:21 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:21.527167 | orchestrator | 2025-10-08 15:41:21 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:21.531969 | orchestrator | 2025-10-08 15:41:21 | INFO  | Task 8f07373b-e87c-4206-88d6-cf39a577abb2 is in state
SUCCESS
2025-10-08 15:41:21.535241 | orchestrator |
2025-10-08 15:41:21.535302 | orchestrator |
2025-10-08 15:41:21.535336 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-10-08 15:41:21.535349 | orchestrator |
2025-10-08 15:41:21.535365 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-10-08 15:41:21.535377 | orchestrator | Wednesday 08 October 2025 15:39:29 +0000 (0:00:00.242) 0:00:00.242 *****
2025-10-08 15:41:21.535388 | orchestrator | ok: [testbed-manager]
2025-10-08 15:41:21.535401 | orchestrator |
2025-10-08 15:41:21.535413 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-10-08 15:41:21.535424 | orchestrator | Wednesday 08 October 2025 15:39:30 +0000 (0:00:01.036) 0:00:01.279 *****
2025-10-08 15:41:21.535435 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-10-08 15:41:21.535457 | orchestrator |
2025-10-08 15:41:21.535469 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-10-08 15:41:21.535480 | orchestrator | Wednesday 08 October 2025 15:39:31 +0000 (0:00:00.580) 0:00:01.862 *****
2025-10-08 15:41:21.535491 | orchestrator | changed: [testbed-manager]
2025-10-08 15:41:21.535503 | orchestrator |
2025-10-08 15:41:21.535520 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-10-08 15:41:21.535532 | orchestrator | Wednesday 08 October 2025 15:39:33 +0000 (0:00:01.692) 0:00:03.555 *****
2025-10-08 15:41:21.535551 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-10-08 15:41:21.535562 | orchestrator | ok: [testbed-manager]
2025-10-08 15:41:21.535573 | orchestrator |
2025-10-08 15:41:21.535584 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-10-08 15:41:21.535596 | orchestrator | Wednesday 08 October 2025 15:40:20 +0000 (0:00:47.070) 0:00:50.625 *****
2025-10-08 15:41:21.535607 | orchestrator | changed: [testbed-manager]
2025-10-08 15:41:21.535618 | orchestrator |
2025-10-08 15:41:21.535629 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:41:21.535641 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:21.535654 | orchestrator |
2025-10-08 15:41:21.535665 | orchestrator |
2025-10-08 15:41:21.535676 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:41:21.535687 | orchestrator | Wednesday 08 October 2025 15:40:24 +0000 (0:00:04.352) 0:00:54.977 *****
2025-10-08 15:41:21.535699 | orchestrator | ===============================================================================
2025-10-08 15:41:21.535710 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.07s
2025-10-08 15:41:21.535721 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.35s
2025-10-08 15:41:21.535732 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.69s
2025-10-08 15:41:21.535743 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.04s
2025-10-08 15:41:21.535754 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.58s
2025-10-08 15:41:21.535765 | orchestrator |
2025-10-08 15:41:21.535776 | orchestrator |
2025-10-08 15:41:21.535787 | orchestrator | PLAY [Apply role common]
*******************************************************
2025-10-08 15:41:21.535798 | orchestrator |
2025-10-08 15:41:21.535809 | orchestrator | TASK [common : include_tasks] **************************************************
2025-10-08 15:41:21.535820 | orchestrator | Wednesday 08 October 2025 15:39:02 +0000 (0:00:00.262) 0:00:00.262 *****
2025-10-08 15:41:21.535833 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:41:21.535847 | orchestrator |
2025-10-08 15:41:21.535860 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-10-08 15:41:21.535873 | orchestrator | Wednesday 08 October 2025 15:39:04 +0000 (0:00:01.292) 0:00:01.554 *****
2025-10-08 15:41:21.535885 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535905 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535918 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535930 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535942 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535954 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535967 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.535979 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-08 15:41:21.535991 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536004 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536016 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536028 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536062 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536074 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536087 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-08 15:41:21.536099 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536124 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536137 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536149 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536162 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536174 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-08 15:41:21.536194 | orchestrator |
2025-10-08 15:41:21.536205 | orchestrator | TASK [common : include_tasks] **************************************************
2025-10-08 15:41:21.536216 | orchestrator | Wednesday 08 October 2025 15:39:08 +0000 (0:00:04.073) 0:00:05.628 *****
2025-10-08 15:41:21.536231 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:41:21.536244 | orchestrator |
2025-10-08 15:41:21.536255 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-10-08 15:41:21.536266 | orchestrator | Wednesday 08 October 2025 15:39:09 +0000 (0:00:01.428) 0:00:07.057 *****
2025-10-08 15:41:21.536281 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536339 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536424 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.536471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.536492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.536594 | orchestrator | 2025-10-08 15:41:21.536606 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-10-08 
15:41:21.536617 | orchestrator | Wednesday 08 October 2025 15:39:14 +0000 (0:00:04.691) 0:00:11.748 ***** 2025-10-08 15:41:21.536635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536683 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536694 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536717 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:41:21.536729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536770 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:41:21.536786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536826 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:41:21.536838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536872 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:41:21.536884 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:41:21.536895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-10-08 15:41:21.536929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.536951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536974 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:41:21.536985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.536997 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:41:21.537007 | orchestrator | 2025-10-08 15:41:21.537018 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-10-08 15:41:21.537030 | orchestrator | Wednesday 08 October 2025 15:39:16 +0000 (0:00:01.703) 0:00:13.452 ***** 2025-10-08 15:41:21.537091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537141 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:41:21.537157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537192 | 
orchestrator | skipping: [testbed-node-0] 2025-10-08 15:41:21.537203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537237 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:41:21.537248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537301 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:41:21.537312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-08 15:41:21.537346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:41:21.537358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537400 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:41:21.537411 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:41:21.537422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537460 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:41:21.537471 | orchestrator |
2025-10-08 15:41:21.537482 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-10-08 15:41:21.537493 | orchestrator | Wednesday 08 October 2025 15:39:18 +0000 (0:00:02.390) 0:00:15.843 *****
2025-10-08 15:41:21.537504 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:41:21.537515 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:41:21.537526 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:41:21.537537 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:41:21.537548 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:41:21.537559 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:41:21.537569 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:41:21.537580 | orchestrator |
2025-10-08 15:41:21.537591 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-10-08 15:41:21.537602 | orchestrator | Wednesday 08 October 2025 15:39:19 +0000 (0:00:01.088) 0:00:16.931 *****
2025-10-08 15:41:21.537613 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:41:21.537624 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:41:21.537634 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:41:21.537645 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:41:21.537656 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:41:21.537666 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:41:21.537677 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:41:21.537688 | orchestrator |
2025-10-08 15:41:21.537699 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-10-08 15:41:21.537710 | orchestrator | Wednesday 08 October 2025 15:39:20 +0000 (0:00:01.266) 0:00:18.198 *****
2025-10-08 15:41:21.537721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537902 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.537914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537977 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.537993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538005 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538080 | orchestrator |
2025-10-08 15:41:21.538096 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-10-08 15:41:21.538108 | orchestrator | Wednesday 08 October 2025 15:39:28 +0000 (0:00:08.014) 0:00:26.213 *****
2025-10-08 15:41:21.538120 | orchestrator | [WARNING]: Skipped
2025-10-08 15:41:21.538131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-10-08 15:41:21.538142 | orchestrator | to this access issue:
2025-10-08 15:41:21.538153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-10-08 15:41:21.538164 | orchestrator | directory
2025-10-08 15:41:21.538175 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:41:21.538186 | orchestrator |
2025-10-08 15:41:21.538197 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-10-08 15:41:21.538207 | orchestrator | Wednesday 08 October 2025 15:39:30 +0000 (0:00:01.548) 0:00:27.761 *****
2025-10-08 15:41:21.538219 | orchestrator | [WARNING]: Skipped
2025-10-08 15:41:21.538229 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-10-08 15:41:21.538247 | orchestrator | to this access issue:
2025-10-08 15:41:21.538259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-10-08 15:41:21.538269 | orchestrator | directory
2025-10-08 15:41:21.538280 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:41:21.538291 | orchestrator |
2025-10-08 15:41:21.538302 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-10-08 15:41:21.538313 | orchestrator | Wednesday 08 October 2025 15:39:31 +0000 (0:00:00.938) 0:00:28.700 *****
2025-10-08 15:41:21.538323 | orchestrator | [WARNING]: Skipped
2025-10-08 15:41:21.538334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-10-08 15:41:21.538345 | orchestrator | to this access issue:
2025-10-08 15:41:21.538356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-10-08 15:41:21.538366 | orchestrator | directory
2025-10-08 15:41:21.538377 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:41:21.538388 | orchestrator |
2025-10-08 15:41:21.538399 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-10-08 15:41:21.538410 | orchestrator | Wednesday 08 October 2025 15:39:32 +0000 (0:00:00.936) 0:00:29.637 *****
2025-10-08 15:41:21.538420 | orchestrator | [WARNING]: Skipped
2025-10-08 15:41:21.538431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-10-08 15:41:21.538442 | orchestrator | to this access issue:
2025-10-08 15:41:21.538453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-10-08 15:41:21.538464 | orchestrator | directory
2025-10-08 15:41:21.538474 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:41:21.538485 | orchestrator |
2025-10-08 15:41:21.538496 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-10-08 15:41:21.538507 | orchestrator | Wednesday 08 October 2025 15:39:33 +0000 (0:00:00.907) 0:00:30.544 *****
2025-10-08 15:41:21.538518 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:21.538528 | orchestrator | changed: [testbed-manager]
2025-10-08 15:41:21.538539 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:21.538550 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:41:21.538561 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:21.538571 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:41:21.538582 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:41:21.538593 | orchestrator |
2025-10-08 15:41:21.538603 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-10-08 15:41:21.538614 | orchestrator | Wednesday 08 October 2025 15:39:37 +0000 (0:00:04.748) 0:00:35.293 *****
2025-10-08 15:41:21.538625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538636 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538647 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538668 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538680 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538691 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538702 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-08 15:41:21.538712 | orchestrator |
2025-10-08 15:41:21.538723 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-10-08 15:41:21.538734 | orchestrator | Wednesday 08 October 2025 15:39:41 +0000 (0:00:03.747) 0:00:39.041 *****
2025-10-08 15:41:21.538745 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:21.538761 | orchestrator | changed: [testbed-manager]
2025-10-08 15:41:21.538772 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:21.538790 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:41:21.538801 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:21.538812 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:41:21.538823 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:41:21.538833 | orchestrator |
2025-10-08 15:41:21.538844 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-10-08 15:41:21.538855 | orchestrator | Wednesday 08 October 2025 15:39:44 +0000 (0:00:02.804) 0:00:41.845 *****
2025-10-08 15:41:21.538866 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.538878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538890 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.538901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538931 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538943 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.538964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.538975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.538998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539021 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539079 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539103 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539137 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539148 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539159 | orchestrator |
2025-10-08 15:41:21.539170 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-10-08 15:41:21.539181 | orchestrator | Wednesday 08 October 2025 15:39:47 +0000 (0:00:03.426) 0:00:45.271 *****
2025-10-08 15:41:21.539197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539208 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539219 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539236 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539248 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539259 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539270 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-08 15:41:21.539280 | orchestrator |
2025-10-08 15:41:21.539291 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-10-08 15:41:21.539302 | orchestrator | Wednesday 08 October 2025 15:39:50 +0000 (0:00:02.771) 0:00:48.043 *****
2025-10-08 15:41:21.539313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539328 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539339 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539350 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539361 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539371 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539382 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-08 15:41:21.539393 | orchestrator |
2025-10-08 15:41:21.539403 | orchestrator | TASK [common : Check common containers] ****************************************
2025-10-08 15:41:21.539414 | orchestrator | Wednesday 08 October 2025 15:39:53 +0000 (0:00:02.794) 0:00:50.837 *****
2025-10-08 15:41:21.539425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539460 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539495 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-08 15:41:21.539534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:41:21.539546 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539557 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539655 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539698 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:41:21.539709 | orchestrator | 2025-10-08 15:41:21.539726 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-10-08 15:41:21.539737 | orchestrator | Wednesday 08 October 2025 15:39:56 +0000 (0:00:03.488) 0:00:54.325 ***** 2025-10-08 15:41:21.539748 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:41:21.539759 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:41:21.539770 | orchestrator | changed: [testbed-manager] 2025-10-08 15:41:21.539781 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:41:21.539791 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:41:21.539802 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:41:21.539813 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:41:21.539824 | orchestrator | 2025-10-08 15:41:21.539835 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-10-08 15:41:21.539845 | orchestrator | Wednesday 08 October 2025 15:40:00 +0000 (0:00:03.115) 0:00:57.441 ***** 2025-10-08 15:41:21.539856 | orchestrator | changed: [testbed-manager] 2025-10-08 15:41:21.539867 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:41:21.539878 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:41:21.539888 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:41:21.539899 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:41:21.539914 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:41:21.539924 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:41:21.539935 | orchestrator | 
2025-10-08 15:41:21.539946 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.539957 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:01.561) 0:00:59.002 ***** 2025-10-08 15:41:21.539968 | orchestrator | 2025-10-08 15:41:21.539979 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.539989 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:00.076) 0:00:59.079 ***** 2025-10-08 15:41:21.540000 | orchestrator | 2025-10-08 15:41:21.540011 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.540022 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:00.071) 0:00:59.150 ***** 2025-10-08 15:41:21.540032 | orchestrator | 2025-10-08 15:41:21.540060 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.540071 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:00.249) 0:00:59.399 ***** 2025-10-08 15:41:21.540081 | orchestrator | 2025-10-08 15:41:21.540093 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.540103 | orchestrator | Wednesday 08 October 2025 15:40:02 +0000 (0:00:00.081) 0:00:59.481 ***** 2025-10-08 15:41:21.540121 | orchestrator | 2025-10-08 15:41:21.540131 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.540142 | orchestrator | Wednesday 08 October 2025 15:40:02 +0000 (0:00:00.073) 0:00:59.555 ***** 2025-10-08 15:41:21.540153 | orchestrator | 2025-10-08 15:41:21.540164 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-08 15:41:21.540174 | orchestrator | Wednesday 08 October 2025 15:40:02 +0000 (0:00:00.067) 0:00:59.623 ***** 2025-10-08 15:41:21.540185 | orchestrator 
| 2025-10-08 15:41:21.540196 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-10-08 15:41:21.540207 | orchestrator | Wednesday 08 October 2025 15:40:02 +0000 (0:00:00.096) 0:00:59.720 ***** 2025-10-08 15:41:21.540218 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:41:21.540228 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:41:21.540239 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:41:21.540250 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:41:21.540260 | orchestrator | changed: [testbed-manager] 2025-10-08 15:41:21.540271 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:41:21.540281 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:41:21.540292 | orchestrator | 2025-10-08 15:41:21.540303 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-10-08 15:41:21.540313 | orchestrator | Wednesday 08 October 2025 15:40:36 +0000 (0:00:34.092) 0:01:33.812 ***** 2025-10-08 15:41:21.540324 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:41:21.540335 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:41:21.540345 | orchestrator | changed: [testbed-manager] 2025-10-08 15:41:21.540356 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:41:21.540366 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:41:21.540377 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:41:21.540387 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:41:21.540398 | orchestrator | 2025-10-08 15:41:21.540409 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-10-08 15:41:21.540419 | orchestrator | Wednesday 08 October 2025 15:41:08 +0000 (0:00:31.782) 0:02:05.594 ***** 2025-10-08 15:41:21.540430 | orchestrator | ok: [testbed-manager] 2025-10-08 15:41:21.540441 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:41:21.540452 | orchestrator | ok: [testbed-node-1] 
2025-10-08 15:41:21.540462 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:41:21.540473 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:41:21.540484 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:41:21.540494 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:41:21.540505 | orchestrator | 2025-10-08 15:41:21.540516 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-10-08 15:41:21.540527 | orchestrator | Wednesday 08 October 2025 15:41:10 +0000 (0:00:01.987) 0:02:07.582 ***** 2025-10-08 15:41:21.540537 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:41:21.540548 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:41:21.540559 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:41:21.540570 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:41:21.540580 | orchestrator | changed: [testbed-manager] 2025-10-08 15:41:21.540591 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:41:21.540601 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:41:21.540612 | orchestrator | 2025-10-08 15:41:21.540623 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:41:21.540634 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540645 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540663 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540675 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540691 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540702 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  
rescued=0 ignored=0 2025-10-08 15:41:21.540713 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-10-08 15:41:21.540724 | orchestrator | 2025-10-08 15:41:21.540734 | orchestrator | 2025-10-08 15:41:21.540745 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:41:21.540756 | orchestrator | Wednesday 08 October 2025 15:41:20 +0000 (0:00:09.915) 0:02:17.497 ***** 2025-10-08 15:41:21.540767 | orchestrator | =============================================================================== 2025-10-08 15:41:21.540777 | orchestrator | common : Restart fluentd container ------------------------------------- 34.09s 2025-10-08 15:41:21.540796 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.78s 2025-10-08 15:41:21.540807 | orchestrator | common : Restart cron container ----------------------------------------- 9.92s 2025-10-08 15:41:21.540818 | orchestrator | common : Copying over config.json files for services -------------------- 8.02s 2025-10-08 15:41:21.540829 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.75s 2025-10-08 15:41:21.540839 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.69s 2025-10-08 15:41:21.540850 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.07s 2025-10-08 15:41:21.540861 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.75s 2025-10-08 15:41:21.540871 | orchestrator | common : Check common containers ---------------------------------------- 3.49s 2025-10-08 15:41:21.540882 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.43s 2025-10-08 15:41:21.540893 | orchestrator | common : Creating log volume -------------------------------------------- 3.12s 2025-10-08 15:41:21.540903 
| orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.80s 2025-10-08 15:41:21.540914 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.79s 2025-10-08 15:41:21.540925 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.77s 2025-10-08 15:41:21.540935 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.39s 2025-10-08 15:41:21.540946 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.99s 2025-10-08 15:41:21.540957 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.71s 2025-10-08 15:41:21.540968 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.56s 2025-10-08 15:41:21.540978 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.55s 2025-10-08 15:41:21.540989 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s 2025-10-08 15:41:21.541000 | orchestrator | 2025-10-08 15:41:21 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:24.567859 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:24.568000 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:24.568753 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task bf8c0644-3629-4e4f-abf3-56502089195e is in state STARTED 2025-10-08 15:41:24.569582 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:24.570524 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:24.573195 | orchestrator | 2025-10-08 15:41:24 | INFO  | Task 
9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:24.573224 | orchestrator | 2025-10-08 15:41:24 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:27.591843 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:27.592192 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:27.593201 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task bf8c0644-3629-4e4f-abf3-56502089195e is in state STARTED 2025-10-08 15:41:27.593918 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:27.595552 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:27.596525 | orchestrator | 2025-10-08 15:41:27 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:27.596620 | orchestrator | 2025-10-08 15:41:27 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:30.628912 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:30.633725 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:30.634462 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task bf8c0644-3629-4e4f-abf3-56502089195e is in state STARTED 2025-10-08 15:41:30.635281 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:30.635996 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:30.636670 | orchestrator | 2025-10-08 15:41:30 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:30.636690 | orchestrator | 2025-10-08 15:41:30 | INFO  | Wait 1 
second(s) until the next check 2025-10-08 15:41:33.681529 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:33.682154 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:33.683083 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task bf8c0644-3629-4e4f-abf3-56502089195e is in state STARTED 2025-10-08 15:41:33.683967 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:33.686952 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:33.687820 | orchestrator | 2025-10-08 15:41:33 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:33.687841 | orchestrator | 2025-10-08 15:41:33 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:36.724375 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:36.728939 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:41:36.728970 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:36.728982 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task bf8c0644-3629-4e4f-abf3-56502089195e is in state SUCCESS 2025-10-08 15:41:36.728994 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:36.729035 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:36.729072 | orchestrator | 2025-10-08 15:41:36 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:36.729084 | orchestrator | 2025-10-08 15:41:36 | INFO  | Wait 1 
second(s) until the next check 2025-10-08 15:41:39.813840 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:39.814139 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:41:39.814933 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:39.815650 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:39.818322 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:39.819175 | orchestrator | 2025-10-08 15:41:39 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:39.819213 | orchestrator | 2025-10-08 15:41:39 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:42.853097 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:42.853346 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:41:42.854123 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:42.854726 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:42.855345 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:42.856228 | orchestrator | 2025-10-08 15:41:42 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:42.856243 | orchestrator | 2025-10-08 15:41:42 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:45.890457 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task 
fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:45.891984 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:41:45.892013 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:45.892321 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:45.893259 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:45.894204 | orchestrator | 2025-10-08 15:41:45 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:45.894478 | orchestrator | 2025-10-08 15:41:45 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:48.954151 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:48.954250 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:41:48.954265 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED 2025-10-08 15:41:48.954277 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state STARTED 2025-10-08 15:41:48.955247 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:41:48.956035 | orchestrator | 2025-10-08 15:41:48 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:41:48.956158 | orchestrator | 2025-10-08 15:41:48 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:41:51.991796 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:41:51.992389 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task 
e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED
2025-10-08 15:41:51.993931 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED
2025-10-08 15:41:51.994303 | orchestrator |
2025-10-08 15:41:51.994397 | orchestrator |
2025-10-08 15:41:51.994411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:41:51.994424 | orchestrator |
2025-10-08 15:41:51.994435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:41:51.994446 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.332) 0:00:00.332 *****
2025-10-08 15:41:51.994458 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:41:51.994470 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:41:51.994481 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:41:51.994492 | orchestrator |
2025-10-08 15:41:51.994503 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:41:51.994514 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.336) 0:00:00.669 *****
2025-10-08 15:41:51.994526 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-10-08 15:41:51.994537 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-10-08 15:41:51.994548 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-10-08 15:41:51.994585 | orchestrator |
2025-10-08 15:41:51.994598 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-10-08 15:41:51.994610 | orchestrator |
2025-10-08 15:41:51.994621 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-10-08 15:41:51.994632 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.384) 0:00:01.054 *****
2025-10-08 15:41:51.994643 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:41:51.994655 | orchestrator |
2025-10-08 15:41:51.994713 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-10-08 15:41:51.994725 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.398) 0:00:01.452 *****
2025-10-08 15:41:51.994737 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-10-08 15:41:51.994748 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-10-08 15:41:51.994759 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-10-08 15:41:51.994770 | orchestrator |
2025-10-08 15:41:51.994781 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-10-08 15:41:51.994792 | orchestrator | Wednesday 08 October 2025 15:41:27 +0000 (0:00:00.743) 0:00:02.196 *****
2025-10-08 15:41:51.994802 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-10-08 15:41:51.994814 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-10-08 15:41:51.994825 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-10-08 15:41:51.994860 | orchestrator |
2025-10-08 15:41:51.994872 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-10-08 15:41:51.994883 | orchestrator | Wednesday 08 October 2025 15:41:29 +0000 (0:00:01.694) 0:00:03.890 *****
2025-10-08 15:41:51.994894 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:51.994905 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:51.994916 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:51.994928 | orchestrator |
2025-10-08 15:41:51.994939 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-10-08 15:41:51.994980 | orchestrator | Wednesday 08 October 2025 15:41:30 +0000 (0:00:01.828) 0:00:05.718 *****
2025-10-08 15:41:51.994993 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:51.995005 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:51.995017 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:51.995029 | orchestrator |
2025-10-08 15:41:51.995064 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:41:51.995078 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.995108 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.995119 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.995130 | orchestrator |
2025-10-08 15:41:51.995141 | orchestrator |
2025-10-08 15:41:51.995152 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:41:51.995163 | orchestrator | Wednesday 08 October 2025 15:41:34 +0000 (0:00:03.430) 0:00:09.149 *****
2025-10-08 15:41:51.995174 | orchestrator | ===============================================================================
2025-10-08 15:41:51.995184 | orchestrator | memcached : Restart memcached container --------------------------------- 3.43s
2025-10-08 15:41:51.995195 | orchestrator | memcached : Check memcached container ----------------------------------- 1.83s
2025-10-08 15:41:51.995206 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.69s
2025-10-08 15:41:51.995217 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.74s
2025-10-08 15:41:51.995227 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.40s
2025-10-08 15:41:51.995238 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2025-10-08 15:41:51.995249 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-10-08 15:41:51.995259 | orchestrator |
2025-10-08 15:41:51.995270 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task a404fc5e-865b-4d7e-860c-9e87fdc074d1 is in state SUCCESS
2025-10-08 15:41:51.995450 | orchestrator |
2025-10-08 15:41:51.995465 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:41:51.995476 | orchestrator |
2025-10-08 15:41:51.995487 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:41:51.995498 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.257) 0:00:00.257 *****
2025-10-08 15:41:51.995509 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:41:51.995520 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:41:51.995531 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:41:51.995542 | orchestrator |
2025-10-08 15:41:51.995553 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:41:51.995563 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.387) 0:00:00.644 *****
2025-10-08 15:41:51.995574 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-10-08 15:41:51.995585 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-10-08 15:41:51.995596 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-10-08 15:41:51.995607 | orchestrator |
2025-10-08 15:41:51.995618 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-10-08 15:41:51.995629 | orchestrator |
2025-10-08 15:41:51.995677 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-10-08 15:41:51.995688 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.502) 0:00:01.147 *****
2025-10-08 15:41:51.995699 | orchestrator | included:
/ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:41:51.995737 | orchestrator | 2025-10-08 15:41:51.995749 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-10-08 15:41:51.995769 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.509) 0:00:01.657 ***** 2025-10-08 15:41:51.995784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995881 | orchestrator | 2025-10-08 15:41:51.995899 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-10-08 15:41:51.995910 | orchestrator | Wednesday 08 October 2025 15:41:27 +0000 (0:00:01.287) 0:00:02.944 ***** 2025-10-08 15:41:51.995922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.995994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996013 | orchestrator | 2025-10-08 15:41:51.996024 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-10-08 15:41:51.996035 | orchestrator | Wednesday 08 October 2025 15:41:30 +0000 (0:00:02.623) 0:00:05.567 ***** 2025-10-08 15:41:51.996068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996166 | orchestrator | 2025-10-08 15:41:51.996178 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-10-08 15:41:51.996190 | orchestrator | Wednesday 08 October 2025 15:41:33 +0000 (0:00:02.950) 0:00:08.517 ***** 2025-10-08 15:41:51.996203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-08 15:41:51.996258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-08 15:41:51.996280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-08 15:41:51.996299 | orchestrator |
2025-10-08 15:41:51.996312 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-08 15:41:51.996323 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:01.942) 0:00:10.460 *****
2025-10-08 15:41:51.996336 | orchestrator |
2025-10-08 15:41:51.996348 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-08 15:41:51.996359 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:00.186) 0:00:10.646 *****
2025-10-08 15:41:51.996372 | orchestrator |
2025-10-08 15:41:51.996384 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-08 15:41:51.996396 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:00.125) 0:00:10.771 *****
2025-10-08 15:41:51.996408 | orchestrator |
2025-10-08 15:41:51.996420 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-10-08 15:41:51.996431 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:00.122) 0:00:10.894 *****
2025-10-08 15:41:51.996442 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:51.996453 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:51.996464 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:51.996475 | orchestrator |
2025-10-08 15:41:51.996486 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-10-08 15:41:51.996497 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:09.480) 0:00:20.374 *****
2025-10-08 15:41:51.996508 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:41:51.996519 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:41:51.996529 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:41:51.996540 | orchestrator |
2025-10-08 15:41:51.996551 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:41:51.996562 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.996573 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.996585 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:41:51.996596 | orchestrator |
2025-10-08 15:41:51.996607 | orchestrator |
2025-10-08 15:41:51.996618 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:41:51.996629 | orchestrator | Wednesday 08 October 2025 15:41:50 +0000 (0:00:05.046) 0:00:25.420 *****
2025-10-08 15:41:51.996640 | orchestrator | ===============================================================================
2025-10-08 15:41:51.996651 | orchestrator | redis : Restart redis container ----------------------------------------- 9.48s
2025-10-08 15:41:51.996662 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.05s
2025-10-08 15:41:51.996673 | orchestrator | redis : Copying over redis config files --------------------------------- 2.95s
2025-10-08 15:41:51.996684 | orchestrator | redis : Copying over default config.json files -------------------------- 2.62s
2025-10-08 15:41:51.996694 | orchestrator | redis : Check redis containers ------------------------------------------ 1.94s
2025-10-08 15:41:51.996705 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.29s
2025-10-08 15:41:51.996716 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s
2025-10-08 15:41:51.996726 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-10-08 15:41:51.996737 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.43s
2025-10-08 15:41:51.996748 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-10-08 15:41:51.996764 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:51.996850 | orchestrator | 2025-10-08 15:41:51 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:51.996872 | orchestrator | 2025-10-08 15:41:51 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:41:55.058936 | orchestrator | 2025-10-08 15:41:55 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED
2025-10-08 15:41:55.059035 | orchestrator | 2025-10-08 15:41:55 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED
2025-10-08 15:41:55.059085 | orchestrator | 2025-10-08 15:41:55 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state STARTED
2025-10-08 15:41:55.059099
| orchestrator | 2025-10-08 15:41:55 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED
2025-10-08 15:41:55.059109 | orchestrator | 2025-10-08 15:41:55 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:41:55.059121 | orchestrator | 2025-10-08 15:41:55 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:42:32.277839 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED
2025-10-08 15:42:32.278709 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED
2025-10-08 15:42:32.279550 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task d212d2ba-a4b6-4c89-8ec5-2914099798a2 is in state SUCCESS
2025-10-08 15:42:32.281948 | orchestrator |
2025-10-08 15:42:32.281990 | orchestrator |
2025-10-08 15:42:32.282002 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:42:32.282014 | orchestrator |
2025-10-08 15:42:32.282126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:42:32.282138 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.284) 0:00:00.284
***** 2025-10-08 15:42:32.282149 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:32.282162 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:32.282173 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:32.282183 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:32.282194 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:32.282204 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:32.282215 | orchestrator | 2025-10-08 15:42:32.282226 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:42:32.282236 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.824) 0:00:01.108 ***** 2025-10-08 15:42:32.282247 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282276 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282286 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282297 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282308 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282318 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-08 15:42:32.282329 | orchestrator | 2025-10-08 15:42:32.282339 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-10-08 15:42:32.282350 | orchestrator | 2025-10-08 15:42:32.282361 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-10-08 15:42:32.282371 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.673) 0:00:01.781 ***** 2025-10-08 15:42:32.282383 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:42:32.282396 | orchestrator | 2025-10-08 15:42:32.282407 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-10-08 15:42:32.282418 | orchestrator | Wednesday 08 October 2025 15:41:27 +0000 (0:00:01.097) 0:00:02.879 ***** 2025-10-08 15:42:32.282429 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-08 15:42:32.282441 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-08 15:42:32.282452 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-08 15:42:32.282463 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-08 15:42:32.282474 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-08 15:42:32.282485 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-08 15:42:32.282495 | orchestrator | 2025-10-08 15:42:32.282506 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-10-08 15:42:32.282517 | orchestrator | Wednesday 08 October 2025 15:41:29 +0000 (0:00:01.371) 0:00:04.251 ***** 2025-10-08 15:42:32.282528 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-08 15:42:32.282540 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-08 15:42:32.282550 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-08 15:42:32.282562 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-08 15:42:32.282575 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-08 15:42:32.282587 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-08 15:42:32.282599 | orchestrator | 2025-10-08 15:42:32.282612 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-10-08 15:42:32.282624 | orchestrator | Wednesday 08 October 2025 15:41:30 +0000 
(0:00:01.643) 0:00:05.895 ***** 2025-10-08 15:42:32.282636 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-10-08 15:42:32.282648 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:32.282661 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-10-08 15:42:32.282674 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:32.282686 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-10-08 15:42:32.282697 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:32.282709 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-10-08 15:42:32.282721 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:32.282732 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-10-08 15:42:32.282745 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:32.282757 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-10-08 15:42:32.282769 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:32.282781 | orchestrator | 2025-10-08 15:42:32.282801 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-10-08 15:42:32.282814 | orchestrator | Wednesday 08 October 2025 15:41:32 +0000 (0:00:01.753) 0:00:07.648 ***** 2025-10-08 15:42:32.282833 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:32.282845 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:32.282857 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:32.282869 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:32.282881 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:32.282892 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:32.282905 | orchestrator | 2025-10-08 15:42:32.282917 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-10-08 15:42:32.282928 | orchestrator | Wednesday 08 October 2025 15:41:33 +0000 
(0:00:01.131) 0:00:08.779 ***** 2025-10-08 15:42:32.282957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.282976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.282989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283096 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283161 | orchestrator | 2025-10-08 15:42:32.283172 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-10-08 15:42:32.283183 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:02.007) 0:00:10.787 ***** 2025-10-08 15:42:32.283194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283206 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283379 | orchestrator | 2025-10-08 15:42:32.283390 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-10-08 15:42:32.283401 | orchestrator | Wednesday 08 October 2025 15:41:40 +0000 (0:00:04.909) 0:00:15.697 ***** 2025-10-08 15:42:32.283412 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:32.283423 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:32.283434 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:32.283444 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:32.283455 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:32.283466 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:32.283477 | orchestrator | 2025-10-08 15:42:32.283488 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-10-08 15:42:32.283499 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:01.240) 0:00:16.937 ***** 2025-10-08 15:42:32.283510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-08 15:42:32.283688 | orchestrator | 
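The healthcheck dicts logged for each openvswitch container map naturally onto Docker's native healthcheck options. A minimal sketch of that mapping (the `healthcheck_flags` helper is illustrative and assumes the bare numeric values are seconds/counts, which is how the values above read; it is not kolla-ansible's actual template logic):

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into `docker run` flags.

    Assumption: the bare numbers ('30', '3', '5') are seconds (or a retry
    count); kolla-ansible's real rendering may differ in detail.
    """
    cmd = " ".join(hc["test"][1:])  # drop the leading CMD-SHELL marker
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The openvswitch-db-server healthcheck exactly as it appears in the log items:
db_hc = {"interval": "30", "retries": "3", "start_period": "5",
         "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30"}
print(healthcheck_flags(db_hc))
```

The same shape covers the vswitchd container, whose probe is `ovs-appctl version` instead of `ovsdb-client list-dbs`.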
2025-10-08 15:42:32.283699 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283710 | orchestrator | Wednesday 08 October 2025 15:41:44 +0000 (0:00:02.710) 0:00:19.647 ***** 2025-10-08 15:42:32.283721 | orchestrator | 2025-10-08 15:42:32.283732 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283742 | orchestrator | Wednesday 08 October 2025 15:41:44 +0000 (0:00:00.230) 0:00:19.878 ***** 2025-10-08 15:42:32.283760 | orchestrator | 2025-10-08 15:42:32.283771 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283781 | orchestrator | Wednesday 08 October 2025 15:41:44 +0000 (0:00:00.110) 0:00:19.988 ***** 2025-10-08 15:42:32.283792 | orchestrator | 2025-10-08 15:42:32.283803 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283814 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:00.103) 0:00:20.092 ***** 2025-10-08 15:42:32.283824 | orchestrator | 2025-10-08 15:42:32.283835 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283846 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:00.146) 0:00:20.239 ***** 2025-10-08 15:42:32.283857 | orchestrator | 2025-10-08 15:42:32.283867 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-08 15:42:32.283878 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:00.295) 0:00:20.534 ***** 2025-10-08 15:42:32.283889 | orchestrator | 2025-10-08 15:42:32.283900 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-10-08 15:42:32.283910 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:00.445) 0:00:20.980 ***** 2025-10-08 15:42:32.283921 | orchestrator 
| changed: [testbed-node-1] 2025-10-08 15:42:32.283932 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:32.283943 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:32.283954 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:32.283965 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:32.283975 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:32.283986 | orchestrator | 2025-10-08 15:42:32.283997 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-10-08 15:42:32.284008 | orchestrator | Wednesday 08 October 2025 15:41:54 +0000 (0:00:08.370) 0:00:29.351 ***** 2025-10-08 15:42:32.284019 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:32.284030 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:32.284041 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:32.284067 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:32.284078 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:32.284089 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:32.284100 | orchestrator | 2025-10-08 15:42:32.284110 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-10-08 15:42:32.284121 | orchestrator | Wednesday 08 October 2025 15:41:56 +0000 (0:00:01.906) 0:00:31.257 ***** 2025-10-08 15:42:32.284132 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:32.284143 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:32.284154 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:32.284165 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:32.284176 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:32.284187 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:32.284197 | orchestrator | 2025-10-08 15:42:32.284213 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-10-08 15:42:32.284224 | orchestrator | Wednesday 08 October 
2025 15:42:05 +0000 (0:00:09.342) 0:00:40.600 ***** 2025-10-08 15:42:32.284235 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-10-08 15:42:32.284246 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-10-08 15:42:32.284257 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-10-08 15:42:32.284268 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-10-08 15:42:32.284279 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-10-08 15:42:32.284296 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-10-08 15:42:32.284308 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-10-08 15:42:32.284325 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-10-08 15:42:32.284336 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-10-08 15:42:32.284347 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-10-08 15:42:32.284357 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-10-08 15:42:32.284368 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-10-08 15:42:32.284379 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': 
True, 'state': 'absent'}) 2025-10-08 15:42:32.284390 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-08 15:42:32.284400 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-08 15:42:32.284411 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-08 15:42:32.284422 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-08 15:42:32.284433 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-08 15:42:32.284443 | orchestrator | 2025-10-08 15:42:32.284454 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-10-08 15:42:32.284465 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:07.643) 0:00:48.243 ***** 2025-10-08 15:42:32.284476 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-10-08 15:42:32.284487 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:32.284498 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-10-08 15:42:32.284509 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:32.284520 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-10-08 15:42:32.284531 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:32.284542 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-10-08 15:42:32.284553 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-10-08 15:42:32.284564 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-10-08 15:42:32.284575 | orchestrator | 2025-10-08 15:42:32.284585 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 
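The loop items of the "Set system-id, hostname and hw-offload" task above carry a column, a key, a value, and optionally `state: absent`. A hedged sketch of the `ovs-vsctl` invocation each item corresponds to (illustrative only; kolla-ansible drives Open vSwitch through its own module rather than building argv like this):

```python
def ovs_vsctl_args(item):
    """Render one loop item of the system-id/hostname/hw-offload task as an
    ovs-vsctl command line. Sketch under the assumption that the items
    target the top-level Open_vSwitch record."""
    if item.get("state") == "absent":
        # hw-offload is looped with state=absent, i.e. the key is removed
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".",
                item["col"], item["name"]]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"{item['col']}:{item['name']}={item['value']}"]

print(ovs_vsctl_args({"col": "external_ids", "name": "system-id",
                      "value": "testbed-node-0"}))
```

This is why the system-id and hostname items report `changed` while the hw-offload items report `ok`: the key was already absent, so the removal is a no-op.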
2025-10-08 15:42:32.284596 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:02.509) 0:00:50.752 ***** 2025-10-08 15:42:32.284607 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-10-08 15:42:32.284618 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:32.284629 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-10-08 15:42:32.284640 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:32.284651 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-10-08 15:42:32.284662 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:32.284673 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-10-08 15:42:32.284684 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-10-08 15:42:32.284695 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-10-08 15:42:32.284706 | orchestrator | 2025-10-08 15:42:32.284717 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-10-08 15:42:32.284727 | orchestrator | Wednesday 08 October 2025 15:42:20 +0000 (0:00:04.547) 0:00:55.300 ***** 2025-10-08 15:42:32.284738 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:32.284749 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:32.284766 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:32.284777 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:32.284787 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:32.284798 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:32.284809 | orchestrator | 2025-10-08 15:42:32.284820 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:42:32.284835 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:42:32.284847 | orchestrator | 
testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:42:32.284858 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:42:32.284869 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-08 15:42:32.284880 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-08 15:42:32.284897 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-08 15:42:32.284908 | orchestrator | 2025-10-08 15:42:32.284919 | orchestrator | 2025-10-08 15:42:32.284930 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:42:32.284941 | orchestrator | Wednesday 08 October 2025 15:42:30 +0000 (0:00:10.272) 0:01:05.573 ***** 2025-10-08 15:42:32.284952 | orchestrator | =============================================================================== 2025-10-08 15:42:32.284963 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.62s 2025-10-08 15:42:32.284973 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.37s 2025-10-08 15:42:32.284984 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.64s 2025-10-08 15:42:32.284995 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.91s 2025-10-08 15:42:32.285006 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.55s 2025-10-08 15:42:32.285016 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.71s 2025-10-08 15:42:32.285027 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.51s 2025-10-08 15:42:32.285038 | orchestrator | 
openvswitch : Ensuring config directories exist ------------------------- 2.01s 2025-10-08 15:42:32.285048 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.91s 2025-10-08 15:42:32.285088 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.75s 2025-10-08 15:42:32.285100 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.64s 2025-10-08 15:42:32.285110 | orchestrator | module-load : Load modules ---------------------------------------------- 1.37s 2025-10-08 15:42:32.285121 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.33s 2025-10-08 15:42:32.285131 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.24s 2025-10-08 15:42:32.285142 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.13s 2025-10-08 15:42:32.285153 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.10s 2025-10-08 15:42:32.285163 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2025-10-08 15:42:32.285174 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2025-10-08 15:42:32.285185 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state STARTED 2025-10-08 15:42:32.285202 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:32.285213 | orchestrator | 2025-10-08 15:42:32 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:32.285224 | orchestrator | 2025-10-08 15:42:32 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:35.312986 | orchestrator | 2025-10-08 15:42:35 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 
15:42:35.316219 | orchestrator | 2025-10-08 15:42:35 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:35.317598 | orchestrator | 2025-10-08 15:42:35 | INFO  | Task a15a1f05-1127-4caa-b7e2-f42d0efa0f77 is in state SUCCESS 2025-10-08 15:42:35.318788 | orchestrator | 2025-10-08 15:42:35.318821 | orchestrator | 2025-10-08 15:42:35.318834 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-10-08 15:42:35.318846 | orchestrator | 2025-10-08 15:42:35.318858 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-10-08 15:42:35.318870 | orchestrator | Wednesday 08 October 2025 15:39:03 +0000 (0:00:00.184) 0:00:00.184 ***** 2025-10-08 15:42:35.318882 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:35.318895 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:35.318906 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:35.318936 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:35.318947 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:35.318958 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:35.318969 | orchestrator | 2025-10-08 15:42:35.318980 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-10-08 15:42:35.318992 | orchestrator | Wednesday 08 October 2025 15:39:03 +0000 (0:00:00.728) 0:00:00.912 ***** 2025-10-08 15:42:35.319003 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.319015 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.319026 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.319095 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.319109 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.319120 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.319132 | orchestrator | 2025-10-08 15:42:35.319143 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled 
state] ****************************** 2025-10-08 15:42:35.319155 | orchestrator | Wednesday 08 October 2025 15:39:04 +0000 (0:00:00.683) 0:00:01.596 ***** 2025-10-08 15:42:35.319166 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.319177 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.319188 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.319199 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.319210 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.319221 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.319232 | orchestrator | 2025-10-08 15:42:35.319243 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-10-08 15:42:35.319254 | orchestrator | Wednesday 08 October 2025 15:39:05 +0000 (0:00:00.741) 0:00:02.338 ***** 2025-10-08 15:42:35.319265 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:35.319277 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:35.319287 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:35.319319 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:35.319330 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:35.319341 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:35.319352 | orchestrator | 2025-10-08 15:42:35.319363 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-10-08 15:42:35.319375 | orchestrator | Wednesday 08 October 2025 15:39:07 +0000 (0:00:02.633) 0:00:04.972 ***** 2025-10-08 15:42:35.319386 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:35.319397 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:35.319409 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:35.319421 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:35.319455 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:35.319468 | orchestrator | changed: [testbed-node-3] 
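The two forwarding tasks just completed are plain sysctl toggles. A minimal sketch of the keys they enable on every node (the key names are the standard kernel ones; the persistence path is an assumption, since the role may apply them via `ansible.posix.sysctl` with its own file):

```python
# Sysctl keys behind the 'Enable IPv4 forwarding' and 'Enable IPv6
# forwarding' tasks above. Writing them to /etc/sysctl.d/ is one common
# way to persist them; the exact mechanism the role uses is not shown
# in the log.
FORWARDING_SYSCTLS = {
    "net.ipv4.ip_forward": 1,
    "net.ipv6.conf.all.forwarding": 1,
}

def render_sysctl(conf):
    """Render a dict of sysctl keys as sysctl.conf-style lines."""
    return "".join(f"{k} = {v}\n" for k, v in sorted(conf.items()))

print(render_sysctl(FORWARDING_SYSCTLS))
```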
2025-10-08 15:42:35.319480 | orchestrator | 2025-10-08 15:42:35.319492 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-10-08 15:42:35.319504 | orchestrator | Wednesday 08 October 2025 15:39:09 +0000 (0:00:01.545) 0:00:06.518 ***** 2025-10-08 15:42:35.319516 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:35.319528 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:35.319540 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:35.319552 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:35.319564 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:35.319576 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:35.319587 | orchestrator | 2025-10-08 15:42:35.319599 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-10-08 15:42:35.319611 | orchestrator | Wednesday 08 October 2025 15:39:11 +0000 (0:00:02.026) 0:00:08.544 ***** 2025-10-08 15:42:35.319623 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.319635 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.319647 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.319659 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.319671 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.319682 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.319695 | orchestrator | 2025-10-08 15:42:35.319707 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-10-08 15:42:35.319719 | orchestrator | Wednesday 08 October 2025 15:39:12 +0000 (0:00:01.053) 0:00:09.598 ***** 2025-10-08 15:42:35.319731 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.319743 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.319755 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.319766 | orchestrator | skipping: [testbed-node-0] 2025-10-08 
15:42:35.319777 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.319788 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.319799 | orchestrator | 2025-10-08 15:42:35.319811 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-10-08 15:42:35.319822 | orchestrator | Wednesday 08 October 2025 15:39:13 +0000 (0:00:00.777) 0:00:10.375 ***** 2025-10-08 15:42:35.319833 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.319844 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.319855 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.319867 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.319878 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.319889 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.319900 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.319911 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.319922 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.319933 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.319957 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.319968 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.319979 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.319990 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.320001 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.320012 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-08 15:42:35.320023 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-08 15:42:35.320034 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.320086 | orchestrator | 2025-10-08 15:42:35.320099 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-10-08 15:42:35.320110 | orchestrator | Wednesday 08 October 2025 15:39:14 +0000 (0:00:00.727) 0:00:11.102 ***** 2025-10-08 15:42:35.320121 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.320138 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.320150 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.320161 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.320172 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.320183 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.320194 | orchestrator | 2025-10-08 15:42:35.320205 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-10-08 15:42:35.320218 | orchestrator | Wednesday 08 October 2025 15:39:15 +0000 (0:00:01.484) 0:00:12.587 ***** 2025-10-08 15:42:35.320229 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:35.320240 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:35.320251 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:35.320262 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:35.320274 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:35.320285 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:35.320296 | orchestrator | 2025-10-08 15:42:35.320307 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-10-08 15:42:35.320318 | orchestrator | Wednesday 08 October 2025 15:39:16 +0000 (0:00:00.989) 0:00:13.576 ***** 2025-10-08 15:42:35.320329 | 
orchestrator | changed: [testbed-node-4] 2025-10-08 15:42:35.320340 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:42:35.320351 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:42:35.320362 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:42:35.320373 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:42:35.320384 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:42:35.320395 | orchestrator | 2025-10-08 15:42:35.320406 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-10-08 15:42:35.320417 | orchestrator | Wednesday 08 October 2025 15:39:21 +0000 (0:00:05.488) 0:00:19.065 ***** 2025-10-08 15:42:35.320428 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.320439 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.320450 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.320461 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.320472 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.320483 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.320494 | orchestrator | 2025-10-08 15:42:35.320505 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-10-08 15:42:35.320516 | orchestrator | Wednesday 08 October 2025 15:39:23 +0000 (0:00:01.621) 0:00:20.686 ***** 2025-10-08 15:42:35.320527 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.320538 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.320549 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.320560 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.320571 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.320582 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.320593 | orchestrator | 2025-10-08 15:42:35.320605 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use 
of a custom container registry] *** 2025-10-08 15:42:35.320618 | orchestrator | Wednesday 08 October 2025 15:39:26 +0000 (0:00:02.442) 0:00:23.128 ***** 2025-10-08 15:42:35.320629 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:35.320640 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:35.320651 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:35.320662 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:35.320673 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:35.320684 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:35.320695 | orchestrator | 2025-10-08 15:42:35.320706 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-10-08 15:42:35.320717 | orchestrator | Wednesday 08 October 2025 15:39:27 +0000 (0:00:01.033) 0:00:24.162 ***** 2025-10-08 15:42:35.320735 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-10-08 15:42:35.320747 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-10-08 15:42:35.320758 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-10-08 15:42:35.320769 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-10-08 15:42:35.320780 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-10-08 15:42:35.320791 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-10-08 15:42:35.320802 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-10-08 15:42:35.320813 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-10-08 15:42:35.320824 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-10-08 15:42:35.320835 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-10-08 15:42:35.320846 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-10-08 15:42:35.320857 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-10-08 15:42:35.320868 | orchestrator | 2025-10-08 15:42:35.320879 | orchestrator | 
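The `rancher` and `rancher/k3s` directories created above exist to hold `/etc/rancher/k3s/registries.yaml`, which the k3s_custom_registries role manages. A hedged sketch of the mirror stanza such a file typically carries (the endpoint below is illustrative; the actual mirrors this job wrote are not visible in the log):

```python
# Illustrative k3s registries.yaml content of the kind the
# k3s_custom_registries role inserts. The mirror target and endpoint
# are placeholders, not values taken from this deployment.
REGISTRIES_YAML = """\
mirrors:
  "docker.io":
    endpoint:
      - "https://registry.example.com"
"""

print(REGISTRIES_YAML)
```

k3s reads this file at startup, so the role writes it before the k3s server and agent services are brought up in the following plays.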
TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-10-08 15:42:35.320890 | orchestrator | Wednesday 08 October 2025 15:39:29 +0000 (0:00:02.380) 0:00:26.543 *****
2025-10-08 15:42:35.320901 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:42:35.320912 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.320923 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.320934 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.320945 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:42:35.320956 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:42:35.320967 | orchestrator |
2025-10-08 15:42:35.320985 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-10-08 15:42:35.320997 | orchestrator |
2025-10-08 15:42:35.321008 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-10-08 15:42:35.321019 | orchestrator | Wednesday 08 October 2025 15:39:31 +0000 (0:00:01.654) 0:00:28.197 *****
2025-10-08 15:42:35.321030 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321041 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321052 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321079 | orchestrator |
2025-10-08 15:42:35.321090 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-10-08 15:42:35.321101 | orchestrator | Wednesday 08 October 2025 15:39:32 +0000 (0:00:01.231) 0:00:29.428 *****
2025-10-08 15:42:35.321112 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321122 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321133 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321144 | orchestrator |
2025-10-08 15:42:35.321154 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-10-08 15:42:35.321170 | orchestrator | Wednesday 08 October 2025 15:39:33 +0000 (0:00:01.407) 0:00:30.835 *****
2025-10-08 15:42:35.321182 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321193 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321203 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321214 | orchestrator |
2025-10-08 15:42:35.321225 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-10-08 15:42:35.321236 | orchestrator | Wednesday 08 October 2025 15:39:34 +0000 (0:00:01.216) 0:00:32.052 *****
2025-10-08 15:42:35.321246 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321257 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321268 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321279 | orchestrator |
2025-10-08 15:42:35.321290 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-10-08 15:42:35.321300 | orchestrator | Wednesday 08 October 2025 15:39:35 +0000 (0:00:00.908) 0:00:32.961 *****
2025-10-08 15:42:35.321311 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.321322 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321333 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321351 | orchestrator |
2025-10-08 15:42:35.321362 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-10-08 15:42:35.321373 | orchestrator | Wednesday 08 October 2025 15:39:36 +0000 (0:00:00.470) 0:00:33.432 *****
2025-10-08 15:42:35.321384 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321394 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321405 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321416 | orchestrator |
2025-10-08 15:42:35.321427 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-10-08 15:42:35.321438 | orchestrator | Wednesday 08 October 2025 15:39:37 +0000 (0:00:00.946) 0:00:34.379 *****
2025-10-08 15:42:35.321449 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.321460 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.321470 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.321481 | orchestrator |
2025-10-08 15:42:35.321492 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-10-08 15:42:35.321503 | orchestrator | Wednesday 08 October 2025 15:39:38 +0000 (0:00:01.439) 0:00:35.819 *****
2025-10-08 15:42:35.321514 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:42:35.321525 | orchestrator |
2025-10-08 15:42:35.321536 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-10-08 15:42:35.321546 | orchestrator | Wednesday 08 October 2025 15:39:40 +0000 (0:00:01.567) 0:00:37.386 *****
2025-10-08 15:42:35.321557 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.321568 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.321579 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.321590 | orchestrator |
2025-10-08 15:42:35.321600 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-10-08 15:42:35.321611 | orchestrator | Wednesday 08 October 2025 15:39:42 +0000 (0:00:01.980) 0:00:39.367 *****
2025-10-08 15:42:35.321622 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321633 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321644 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.321655 | orchestrator |
2025-10-08 15:42:35.321666 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-10-08 15:42:35.321676 | orchestrator | Wednesday 08 October 2025 15:39:42 +0000 (0:00:00.657) 0:00:40.024 *****
2025-10-08 15:42:35.321687 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321698 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321709 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.321720 | orchestrator |
2025-10-08 15:42:35.321731 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-10-08 15:42:35.321742 | orchestrator | Wednesday 08 October 2025 15:39:44 +0000 (0:00:01.133) 0:00:41.158 *****
2025-10-08 15:42:35.321752 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321763 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321774 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.321785 | orchestrator |
2025-10-08 15:42:35.321796 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-10-08 15:42:35.321807 | orchestrator | Wednesday 08 October 2025 15:39:45 +0000 (0:00:01.751) 0:00:42.910 *****
2025-10-08 15:42:35.321818 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.321828 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321839 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321850 | orchestrator |
2025-10-08 15:42:35.321861 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-10-08 15:42:35.321872 | orchestrator | Wednesday 08 October 2025 15:39:46 +0000 (0:00:00.612) 0:00:43.522 *****
2025-10-08 15:42:35.321883 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.321894 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.321904 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.321915 | orchestrator |
2025-10-08 15:42:35.321926 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-10-08 15:42:35.321943 | orchestrator | Wednesday 08 October 2025 15:39:46 +0000 (0:00:00.347) 0:00:43.870 *****
2025-10-08 15:42:35.321954 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.321965 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.321976 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.321987 | orchestrator |
2025-10-08 15:42:35.322004 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-10-08 15:42:35.322128 | orchestrator | Wednesday 08 October 2025 15:39:48 +0000 (0:00:02.077) 0:00:45.947 *****
2025-10-08 15:42:35.322146 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-10-08 15:42:35.322158 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-10-08 15:42:35.322170 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-10-08 15:42:35.322192 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-10-08 15:42:35.322204 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-10-08 15:42:35.322215 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-10-08 15:42:35.322226 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-10-08 15:42:35.322237 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
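The FAILED - RETRYING lines here are Ansible's normal `until`/`retries` polling while the transient k3s-init service brings the servers up; they only indicate a problem if the retry budget is exhausted. The same poll-until-ready pattern can be sketched in plain shell (the `check` command below is a stand-in marker-file test, not the role's actual node-count check):

```shell
#!/bin/sh
# Retry a command until it succeeds or the retry budget runs out,
# mirroring Ansible's `retries`/`delay`/`until` loop.
retry() {
    retries=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        echo "FAILED - RETRYING: $* ($((retries - i)) retries left)." >&2
        sleep "$delay"
    done
    return 1
}

# Stand-in readiness check: succeeds once a marker file exists. In the real
# role this would be something like counting Ready nodes in kubectl output.
check() { [ -f /tmp/k3s_ready_marker ]; }

touch /tmp/k3s_ready_marker
retry 20 1 check && echo "all nodes joined"
```

The 20-retry budget matches the count visible in the log; the delay and check are illustrative only.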
2025-10-08 15:42:35.322248 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-10-08 15:42:35.322259 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-10-08 15:42:35.322270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-10-08 15:42:35.322281 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-10-08 15:42:35.322292 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.322303 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.322314 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.322325 | orchestrator |
2025-10-08 15:42:35.322336 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-10-08 15:42:35.322347 | orchestrator | Wednesday 08 October 2025 15:40:33 +0000 (0:00:44.590) 0:01:30.538 *****
2025-10-08 15:42:35.322358 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.322369 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.322380 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.322391 | orchestrator |
2025-10-08 15:42:35.322402 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-10-08 15:42:35.322413 | orchestrator | Wednesday 08 October 2025 15:40:33 +0000 (0:00:00.331) 0:01:30.869 *****
2025-10-08 15:42:35.322424 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.322435 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.322446 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.322457 | orchestrator |
2025-10-08 15:42:35.322468 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-10-08 15:42:35.322479 | orchestrator | Wednesday 08 October 2025 15:40:34 +0000 (0:00:01.151) 0:01:32.020 *****
2025-10-08 15:42:35.322490 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.322501 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.322520 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.322531 | orchestrator |
2025-10-08 15:42:35.322542 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-10-08 15:42:35.322553 | orchestrator | Wednesday 08 October 2025 15:40:36 +0000 (0:00:01.412) 0:01:33.433 *****
2025-10-08 15:42:35.322564 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.322575 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.322586 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.322597 | orchestrator |
2025-10-08 15:42:35.322608 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-10-08 15:42:35.322619 | orchestrator | Wednesday 08 October 2025 15:41:03 +0000 (0:00:27.317) 0:02:00.750 *****
2025-10-08 15:42:35.322630 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.322641 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.322652 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.322663 | orchestrator |
2025-10-08 15:42:35.322674 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-10-08 15:42:35.322685 | orchestrator | Wednesday 08 October 2025 15:41:04 +0000 (0:00:00.643) 0:02:01.393 *****
2025-10-08 15:42:35.322696 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.322707 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.322718 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.322728 | orchestrator |
2025-10-08 15:42:35.322739 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-10-08 15:42:35.322750 | orchestrator | Wednesday 08 October 2025 15:41:04 +0000 (0:00:00.595) 0:02:01.988 *****
2025-10-08 15:42:35.322761 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.322772 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.322783 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.322794 | orchestrator |
2025-10-08 15:42:35.322805 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-10-08 15:42:35.322816 | orchestrator | Wednesday 08 October 2025 15:41:05 +0000 (0:00:00.614) 0:02:02.603 *****
2025-10-08 15:42:35.322828 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.322847 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.322858 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.322869 | orchestrator |
2025-10-08 15:42:35.322880 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-10-08 15:42:35.322891 | orchestrator | Wednesday 08 October 2025 15:41:06 +0000 (0:00:00.766) 0:02:03.369 *****
2025-10-08 15:42:35.322902 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.322913 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.322924 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.322935 | orchestrator |
2025-10-08 15:42:35.322946 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-10-08 15:42:35.322957 | orchestrator | Wednesday 08 October 2025 15:41:06 +0000 (0:00:00.270) 0:02:03.639 *****
2025-10-08 15:42:35.322968 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.322979 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.322990 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.323001 | orchestrator |
2025-10-08 15:42:35.323012 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-10-08 15:42:35.323028 | orchestrator | Wednesday 08 October 2025 15:41:07 +0000 (0:00:00.646) 0:02:04.286 *****
2025-10-08 15:42:35.323039 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.323050 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.323077 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.323088 | orchestrator |
2025-10-08 15:42:35.323099 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-10-08 15:42:35.323110 | orchestrator | Wednesday 08 October 2025 15:41:07 +0000 (0:00:00.609) 0:02:04.896 *****
2025-10-08 15:42:35.323121 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.323131 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.323142 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.323153 | orchestrator |
2025-10-08 15:42:35.323171 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-10-08 15:42:35.323182 | orchestrator | Wednesday 08 October 2025 15:41:08 +0000 (0:00:00.989) 0:02:05.885 *****
2025-10-08 15:42:35.323193 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:42:35.323204 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:42:35.323215 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:42:35.323226 | orchestrator |
2025-10-08 15:42:35.323237 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-10-08 15:42:35.323248 | orchestrator | Wednesday 08 October 2025 15:41:09 +0000 (0:00:00.917) 0:02:06.802 *****
2025-10-08 15:42:35.323258 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.323269 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.323280 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.323290 | orchestrator |
2025-10-08 15:42:35.323301 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-10-08 15:42:35.323312 | orchestrator | Wednesday 08 October 2025 15:41:10 +0000 (0:00:00.287) 0:02:07.089 *****
2025-10-08 15:42:35.323323 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.323334 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.323344 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.323355 | orchestrator |
2025-10-08 15:42:35.323366 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-10-08 15:42:35.323377 | orchestrator | Wednesday 08 October 2025 15:41:10 +0000 (0:00:00.296) 0:02:07.385 *****
2025-10-08 15:42:35.323388 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.323399 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.323410 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.323420 | orchestrator |
2025-10-08 15:42:35.323431 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-10-08 15:42:35.323442 | orchestrator | Wednesday 08 October 2025 15:41:11 +0000 (0:00:01.151) 0:02:08.537 *****
2025-10-08 15:42:35.323453 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.323463 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.323474 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.323485 | orchestrator |
2025-10-08 15:42:35.323496 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-10-08 15:42:35.323507 | orchestrator | Wednesday 08 October 2025 15:41:12 +0000 (0:00:00.717) 0:02:09.255 *****
2025-10-08 15:42:35.323518 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-10-08 15:42:35.323529 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-10-08 15:42:35.323554 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-10-08 15:42:35.323565 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-10-08 15:42:35.323576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-10-08 15:42:35.323587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-10-08 15:42:35.323598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-10-08 15:42:35.323609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-10-08 15:42:35.323620 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-10-08 15:42:35.323631 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-10-08 15:42:35.323642 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-10-08 15:42:35.323652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-10-08 15:42:35.323663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-10-08 15:42:35.323686 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-10-08 15:42:35.323698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-10-08 15:42:35.323709 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-10-08 15:42:35.323720 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-10-08 15:42:35.323731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-10-08 15:42:35.323741 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-10-08 15:42:35.323752 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-10-08 15:42:35.323763 | orchestrator |
2025-10-08 15:42:35.323774 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-10-08 15:42:35.323785 | orchestrator |
2025-10-08 15:42:35.323800 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-10-08 15:42:35.323811 | orchestrator | Wednesday 08 October 2025 15:41:15 +0000 (0:00:03.120) 0:02:12.375 *****
2025-10-08 15:42:35.323822 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:42:35.323833 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:42:35.323844 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:42:35.323855 | orchestrator |
2025-10-08 15:42:35.323866 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-10-08 15:42:35.323877 | orchestrator | Wednesday 08 October 2025 15:41:15 +0000 (0:00:00.537) 0:02:12.913 *****
2025-10-08 15:42:35.323887 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:42:35.323898 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:42:35.323909 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:42:35.323920 | orchestrator |
2025-10-08 15:42:35.323930 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-10-08 15:42:35.323941 | orchestrator | Wednesday 08 October 2025 15:41:16 +0000 (0:00:00.640) 0:02:13.553 *****
2025-10-08 15:42:35.323952 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:42:35.323963 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:42:35.323973 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:42:35.323984 | orchestrator |
2025-10-08 15:42:35.323995 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-10-08 15:42:35.324006 | orchestrator | Wednesday 08 October 2025 15:41:16 +0000 (0:00:00.317) 0:02:13.871 *****
2025-10-08 15:42:35.324016 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:42:35.324027 | orchestrator |
2025-10-08 15:42:35.324038 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-10-08 15:42:35.324049 | orchestrator | Wednesday 08 October 2025 15:41:17 +0000 (0:00:00.685) 0:02:14.556 *****
2025-10-08 15:42:35.324109 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:42:35.324121 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:42:35.324132 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:42:35.324142 | orchestrator |
2025-10-08 15:42:35.324153 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-10-08 15:42:35.324164 | orchestrator | Wednesday 08 October 2025 15:41:17 +0000 (0:00:00.319) 0:02:14.875 *****
2025-10-08 15:42:35.324175 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:42:35.324186 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:42:35.324196 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:42:35.324207 | orchestrator |
2025-10-08 15:42:35.324218 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-10-08 15:42:35.324229 | orchestrator | Wednesday 08 October 2025 15:41:18 +0000 (0:00:00.293) 0:02:15.169 *****
2025-10-08 15:42:35.324239 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:42:35.324250 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:42:35.324268 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:42:35.324279 | orchestrator |
2025-10-08 15:42:35.324290 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-10-08 15:42:35.324300 | orchestrator | Wednesday 08 October 2025 15:41:18 +0000 (0:00:00.303) 0:02:15.473 *****
2025-10-08 15:42:35.324311 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:42:35.324322 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:42:35.324333 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:42:35.324343 | orchestrator |
2025-10-08 15:42:35.324354 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-10-08 15:42:35.324365 | orchestrator | Wednesday 08 October 2025 15:41:19 +0000 (0:00:00.837) 0:02:16.311 *****
2025-10-08 15:42:35.324376 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:42:35.324386 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:42:35.324397 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:42:35.324408 | orchestrator |
2025-10-08 15:42:35.324419 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-10-08 15:42:35.324429 | orchestrator | Wednesday 08 October 2025 15:41:20 +0000 (0:00:01.249) 0:02:17.561 *****
2025-10-08 15:42:35.324440 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:42:35.324451 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:42:35.324462 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:42:35.324472 | orchestrator |
2025-10-08 15:42:35.324483 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-10-08 15:42:35.324494 | orchestrator | Wednesday 08 October 2025 15:41:21 +0000 (0:00:01.316) 0:02:18.877 *****
2025-10-08 15:42:35.324505 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:42:35.324515 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:42:35.324526 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:42:35.324537 | orchestrator |
2025-10-08 15:42:35.324548 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-10-08 15:42:35.324559 | orchestrator |
2025-10-08 15:42:35.324569 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-10-08 15:42:35.324580 | orchestrator | Wednesday 08 October 2025 15:41:34 +0000 (0:00:12.988) 0:02:31.866 *****
2025-10-08 15:42:35.324591 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.324602 | orchestrator |
2025-10-08 15:42:35.324612 | orchestrator | TASK [Create .kube directory] **************************************************
2025-10-08 15:42:35.324623 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:00.824) 0:02:32.691 *****
2025-10-08 15:42:35.324639 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.324649 | orchestrator |
2025-10-08 15:42:35.324658 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-10-08 15:42:35.324668 | orchestrator | Wednesday 08 October 2025 15:41:36 +0000 (0:00:00.510) 0:02:33.201 *****
2025-10-08 15:42:35.324678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-10-08 15:42:35.324688 | orchestrator |
2025-10-08 15:42:35.324698 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-10-08 15:42:35.324707 | orchestrator | Wednesday 08 October 2025 15:41:36 +0000 (0:00:00.553) 0:02:33.755 *****
2025-10-08 15:42:35.324717 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.324727 | orchestrator |
2025-10-08 15:42:35.324737 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-10-08 15:42:35.324747 | orchestrator | Wednesday 08 October 2025 15:41:37 +0000 (0:00:00.918) 0:02:34.673 *****
2025-10-08 15:42:35.324757 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.324767 | orchestrator |
2025-10-08 15:42:35.324781 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-10-08 15:42:35.324791 | orchestrator | Wednesday 08 October 2025 15:41:38 +0000 (0:00:00.630) 0:02:35.304 *****
2025-10-08 15:42:35.324801 | orchestrator | changed: [testbed-manager -> localhost]
2025-10-08 15:42:35.324811 | orchestrator |
2025-10-08 15:42:35.324821 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-10-08 15:42:35.324830 | orchestrator | Wednesday 08 October 2025 15:41:39 +0000 (0:00:01.622) 0:02:36.927 *****
2025-10-08 15:42:35.324847 | orchestrator | changed: [testbed-manager -> localhost]
2025-10-08 15:42:35.324857 | orchestrator |
2025-10-08 15:42:35.324867 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-10-08 15:42:35.324877 | orchestrator | Wednesday 08 October 2025 15:41:40 +0000 (0:00:00.874) 0:02:37.801 *****
2025-10-08 15:42:35.324887 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.324896 | orchestrator |
2025-10-08 15:42:35.324906 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-10-08 15:42:35.324916 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:00.438) 0:02:38.240 *****
2025-10-08 15:42:35.324926 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.324935 | orchestrator |
2025-10-08 15:42:35.324945 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-10-08 15:42:35.324955 | orchestrator |
2025-10-08 15:42:35.324964 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-10-08 15:42:35.324974 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:00.541) 0:02:38.781 *****
2025-10-08 15:42:35.324984 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.324994 | orchestrator |
2025-10-08 15:42:35.325003 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-10-08 15:42:35.325013 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:00.113) 0:02:38.895 *****
2025-10-08 15:42:35.325023 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-10-08 15:42:35.325033 | orchestrator |
2025-10-08 15:42:35.325042 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-10-08 15:42:35.325066 | orchestrator | Wednesday 08 October 2025 15:41:42 +0000 (0:00:00.203) 0:02:39.098 *****
2025-10-08 15:42:35.325076 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.325086 | orchestrator |
2025-10-08 15:42:35.325096 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-10-08 15:42:35.325106 | orchestrator | Wednesday 08 October 2025 15:41:42 +0000 (0:00:00.666) 0:02:39.764 *****
2025-10-08 15:42:35.325116 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.325126 | orchestrator |
2025-10-08 15:42:35.325135 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-10-08 15:42:35.325145 | orchestrator | Wednesday 08 October 2025 15:41:43 +0000 (0:00:01.227) 0:02:40.992 *****
2025-10-08 15:42:35.325155 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.325165 | orchestrator |
2025-10-08 15:42:35.325175 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-10-08 15:42:35.325185 | orchestrator | Wednesday 08 October 2025 15:41:44 +0000 (0:00:00.811) 0:02:41.804 *****
2025-10-08 15:42:35.325194 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.325204 | orchestrator |
2025-10-08 15:42:35.325214 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-10-08 15:42:35.325223 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:00.396) 0:02:42.200 *****
2025-10-08 15:42:35.325233 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.325243 | orchestrator |
2025-10-08 15:42:35.325253 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-10-08 15:42:35.325263 | orchestrator | Wednesday 08 October 2025 15:41:52 +0000 (0:00:07.574) 0:02:49.775 *****
2025-10-08 15:42:35.325272 | orchestrator | changed: [testbed-manager]
2025-10-08 15:42:35.325282 | orchestrator |
2025-10-08 15:42:35.325292 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-10-08 15:42:35.325302 | orchestrator | Wednesday 08 October 2025 15:42:04 +0000 (0:00:11.862) 0:03:01.637 *****
2025-10-08 15:42:35.325311 | orchestrator | ok: [testbed-manager]
2025-10-08 15:42:35.325321 | orchestrator |
2025-10-08 15:42:35.325331 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-10-08 15:42:35.325341 | orchestrator |
2025-10-08 15:42:35.325351 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-10-08 15:42:35.325367 | orchestrator | Wednesday 08 October 2025 15:42:05 +0000 (0:00:00.440) 0:03:02.078 *****
2025-10-08 15:42:35.325376 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:42:35.325386 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:42:35.325396 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:42:35.325406 | orchestrator |
2025-10-08 15:42:35.325416 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-10-08 15:42:35.325425 | orchestrator | Wednesday 08 October 2025 15:42:05 +0000 (0:00:00.270) 0:03:02.348 *****
2025-10-08 15:42:35.325435 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325445 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:42:35.325455 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:42:35.325465 | orchestrator |
2025-10-08 15:42:35.325479 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-10-08 15:42:35.325490 | orchestrator | Wednesday 08 October 2025 15:42:05 +0000 (0:00:00.260) 0:03:02.609 *****
2025-10-08 15:42:35.325500 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:42:35.325509 | orchestrator |
2025-10-08 15:42:35.325519 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-10-08 15:42:35.325529 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:00.656) 0:03:03.265 *****
2025-10-08 15:42:35.325539 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325548 | orchestrator |
2025-10-08 15:42:35.325558 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-10-08 15:42:35.325568 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:00.215) 0:03:03.481 *****
2025-10-08 15:42:35.325578 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325587 | orchestrator |
2025-10-08 15:42:35.325601 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-10-08 15:42:35.325611 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:00.237) 0:03:03.718 *****
2025-10-08 15:42:35.325621 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325631 | orchestrator |
2025-10-08 15:42:35.325640 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-10-08 15:42:35.325650 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:00.249) 0:03:03.968 *****
2025-10-08 15:42:35.325660 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325669 | orchestrator |
2025-10-08 15:42:35.325679 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-10-08 15:42:35.325689 | orchestrator | Wednesday 08 October 2025 15:42:07 +0000 (0:00:00.179) 0:03:04.148 *****
2025-10-08 15:42:35.325698 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325708 | orchestrator |
2025-10-08 15:42:35.325718 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-10-08 15:42:35.325727 | orchestrator | Wednesday 08 October 2025 15:42:07 +0000 (0:00:00.203) 0:03:04.351 *****
2025-10-08 15:42:35.325737 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325747 | orchestrator |
2025-10-08 15:42:35.325756 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-10-08 15:42:35.325766 | orchestrator | Wednesday 08 October 2025 15:42:07 +0000 (0:00:00.185) 0:03:04.536 *****
2025-10-08 15:42:35.325775 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325785 | orchestrator |
2025-10-08 15:42:35.325795 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-10-08 15:42:35.325804 | orchestrator | Wednesday 08 October 2025 15:42:07 +0000 (0:00:00.437) 0:03:04.974 *****
2025-10-08 15:42:35.325814 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325823 | orchestrator |
2025-10-08 15:42:35.325833 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-10-08 15:42:35.325842 | orchestrator | Wednesday 08 October 2025 15:42:08 +0000 (0:00:00.230) 0:03:05.204 *****
2025-10-08 15:42:35.325852 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325862 | orchestrator |
2025-10-08 15:42:35.325871 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-10-08 15:42:35.325887 | orchestrator | Wednesday 08 October 2025 15:42:08 +0000 (0:00:00.705) 0:03:05.910 *****
2025-10-08 15:42:35.325896 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-10-08 15:42:35.325907 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-10-08 15:42:35.325916 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325926 | orchestrator |
2025-10-08 15:42:35.325936 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-10-08 15:42:35.325945 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:00.346) 0:03:06.256 *****
2025-10-08 15:42:35.325955 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.325965 | orchestrator |
2025-10-08 15:42:35.325974 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-10-08 15:42:35.325984 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:00.231) 0:03:06.488 *****
2025-10-08 15:42:35.325994 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.326004 | orchestrator |
2025-10-08 15:42:35.326013 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-10-08 15:42:35.326047 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:00.224) 0:03:06.712 *****
2025-10-08 15:42:35.326072 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.326082 | orchestrator |
2025-10-08 15:42:35.326091 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-10-08 15:42:35.326101 | orchestrator | Wednesday 08 October 2025 15:42:10 +0000 (0:00:00.365) 0:03:07.078 *****
2025-10-08 15:42:35.326111 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.326121 | orchestrator |
2025-10-08 15:42:35.326130 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-10-08 15:42:35.326140 | orchestrator | Wednesday 08 October 2025 15:42:10 +0000 (0:00:00.380) 0:03:07.458 *****
2025-10-08 15:42:35.326150 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:42:35.326159 | orchestrator |
2025-10-08 15:42:35.326169 |
orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-10-08 15:42:35.326179 | orchestrator | Wednesday 08 October 2025 15:42:10 +0000 (0:00:00.304) 0:03:07.764 ***** 2025-10-08 15:42:35.326188 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326198 | orchestrator | 2025-10-08 15:42:35.326208 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-10-08 15:42:35.326218 | orchestrator | Wednesday 08 October 2025 15:42:10 +0000 (0:00:00.238) 0:03:08.003 ***** 2025-10-08 15:42:35.326227 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326237 | orchestrator | 2025-10-08 15:42:35.326247 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-10-08 15:42:35.326257 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:00.206) 0:03:08.209 ***** 2025-10-08 15:42:35.326266 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326276 | orchestrator | 2025-10-08 15:42:35.326286 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-10-08 15:42:35.326302 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:00.201) 0:03:08.410 ***** 2025-10-08 15:42:35.326312 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326322 | orchestrator | 2025-10-08 15:42:35.326331 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-10-08 15:42:35.326341 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:00.204) 0:03:08.614 ***** 2025-10-08 15:42:35.326351 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326361 | orchestrator | 2025-10-08 15:42:35.326370 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-10-08 15:42:35.326380 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:00.185) 0:03:08.800 ***** 
2025-10-08 15:42:35.326390 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326400 | orchestrator | 2025-10-08 15:42:35.326410 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-10-08 15:42:35.326420 | orchestrator | Wednesday 08 October 2025 15:42:12 +0000 (0:00:00.770) 0:03:09.570 ***** 2025-10-08 15:42:35.326440 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-10-08 15:42:35.326450 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-10-08 15:42:35.326460 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-10-08 15:42:35.326470 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-10-08 15:42:35.326479 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326489 | orchestrator | 2025-10-08 15:42:35.326499 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-10-08 15:42:35.326508 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.507) 0:03:10.077 ***** 2025-10-08 15:42:35.326518 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326527 | orchestrator | 2025-10-08 15:42:35.326537 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-10-08 15:42:35.326547 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.222) 0:03:10.300 ***** 2025-10-08 15:42:35.326556 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326566 | orchestrator | 2025-10-08 15:42:35.326576 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-10-08 15:42:35.326585 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.170) 0:03:10.471 ***** 2025-10-08 15:42:35.326595 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326604 | orchestrator | 2025-10-08 15:42:35.326614 | 
orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-10-08 15:42:35.326624 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.166) 0:03:10.637 ***** 2025-10-08 15:42:35.326633 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326643 | orchestrator | 2025-10-08 15:42:35.326653 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-10-08 15:42:35.326662 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.193) 0:03:10.831 ***** 2025-10-08 15:42:35.326672 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-10-08 15:42:35.326682 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-10-08 15:42:35.326692 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326701 | orchestrator | 2025-10-08 15:42:35.326711 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-10-08 15:42:35.326721 | orchestrator | Wednesday 08 October 2025 15:42:14 +0000 (0:00:00.322) 0:03:11.154 ***** 2025-10-08 15:42:35.326731 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.326740 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.326750 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.326760 | orchestrator | 2025-10-08 15:42:35.326769 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-10-08 15:42:35.326779 | orchestrator | Wednesday 08 October 2025 15:42:14 +0000 (0:00:00.273) 0:03:11.428 ***** 2025-10-08 15:42:35.326789 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:35.326798 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:35.326808 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:35.326818 | orchestrator | 2025-10-08 15:42:35.326828 | orchestrator | PLAY [Apply role k9s] 
********************************************************** 2025-10-08 15:42:35.326837 | orchestrator | 2025-10-08 15:42:35.326847 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-10-08 15:42:35.326857 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:01.152) 0:03:12.580 ***** 2025-10-08 15:42:35.326867 | orchestrator | ok: [testbed-manager] 2025-10-08 15:42:35.326876 | orchestrator | 2025-10-08 15:42:35.326886 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-10-08 15:42:35.326896 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:00.220) 0:03:12.800 ***** 2025-10-08 15:42:35.326905 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-10-08 15:42:35.326915 | orchestrator | 2025-10-08 15:42:35.326934 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-10-08 15:42:35.326943 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:00.261) 0:03:13.062 ***** 2025-10-08 15:42:35.326953 | orchestrator | changed: [testbed-manager] 2025-10-08 15:42:35.326963 | orchestrator | 2025-10-08 15:42:35.326973 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-10-08 15:42:35.326982 | orchestrator | 2025-10-08 15:42:35.326992 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-10-08 15:42:35.327001 | orchestrator | Wednesday 08 October 2025 15:42:21 +0000 (0:00:05.148) 0:03:18.211 ***** 2025-10-08 15:42:35.327011 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:42:35.327021 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:42:35.327030 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:42:35.327040 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:42:35.327050 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:42:35.327073 | 
orchestrator | ok: [testbed-node-2] 2025-10-08 15:42:35.327083 | orchestrator | 2025-10-08 15:42:35.327092 | orchestrator | TASK [Manage labels] *********************************************************** 2025-10-08 15:42:35.327102 | orchestrator | Wednesday 08 October 2025 15:42:22 +0000 (0:00:01.071) 0:03:19.283 ***** 2025-10-08 15:42:35.327117 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-08 15:42:35.327127 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-08 15:42:35.327137 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-08 15:42:35.327146 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-08 15:42:35.327156 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-08 15:42:35.327165 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-08 15:42:35.327175 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-08 15:42:35.327192 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-08 15:42:35.327202 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-08 15:42:35.327212 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-08 15:42:35.327221 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-08 15:42:35.327231 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-08 15:42:35.327240 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-08 15:42:35.327250 | orchestrator | ok: 
[testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-08 15:42:35.327260 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-10-08 15:42:35.327269 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-10-08 15:42:35.327278 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-08 15:42:35.327288 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-10-08 15:42:35.327298 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-08 15:42:35.327307 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-08 15:42:35.327317 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-08 15:42:35.327326 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-08 15:42:35.327336 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-08 15:42:35.327346 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-08 15:42:35.327362 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-08 15:42:35.327372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-08 15:42:35.327381 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-08 15:42:35.327391 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-08 15:42:35.327401 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-08 15:42:35.327410 | orchestrator | ok: [testbed-node-2 
-> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-08 15:42:35.327420 | orchestrator | 2025-10-08 15:42:35.327429 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-10-08 15:42:35.327439 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:11.732) 0:03:31.015 ***** 2025-10-08 15:42:35.327449 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.327459 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.327468 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.327478 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.327488 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.327497 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.327507 | orchestrator | 2025-10-08 15:42:35.327516 | orchestrator | TASK [Manage taints] *********************************************************** 2025-10-08 15:42:35.327526 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.576) 0:03:31.592 ***** 2025-10-08 15:42:35.327536 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:42:35.327545 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:42:35.327555 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:42:35.327564 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:42:35.327574 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:42:35.327584 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:42:35.327594 | orchestrator | 2025-10-08 15:42:35.327603 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:42:35.327613 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:42:35.327624 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-10-08 15:42:35.327634 | orchestrator | testbed-node-1 : ok=39  changed=17 
 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-08 15:42:35.327649 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-08 15:42:35.327659 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-08 15:42:35.327669 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-08 15:42:35.327679 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-08 15:42:35.327689 | orchestrator | 2025-10-08 15:42:35.327699 | orchestrator | 2025-10-08 15:42:35.327709 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:42:35.327724 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.395) 0:03:31.987 ***** 2025-10-08 15:42:35.327734 | orchestrator | =============================================================================== 2025-10-08 15:42:35.327744 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.59s 2025-10-08 15:42:35.327760 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.32s 2025-10-08 15:42:35.327770 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.99s 2025-10-08 15:42:35.327779 | orchestrator | kubectl : Install required packages ------------------------------------ 11.86s 2025-10-08 15:42:35.327789 | orchestrator | Manage labels ---------------------------------------------------------- 11.73s 2025-10-08 15:42:35.327798 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.57s 2025-10-08 15:42:35.327808 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.49s 2025-10-08 15:42:35.327817 | orchestrator | k9s : Install k9s 
packages ---------------------------------------------- 5.15s 2025-10-08 15:42:35.327827 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s 2025-10-08 15:42:35.327837 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.63s 2025-10-08 15:42:35.327846 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.44s 2025-10-08 15:42:35.327856 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.38s 2025-10-08 15:42:35.327866 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.08s 2025-10-08 15:42:35.327875 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.03s 2025-10-08 15:42:35.327885 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.98s 2025-10-08 15:42:35.327894 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.75s 2025-10-08 15:42:35.327904 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.65s 2025-10-08 15:42:35.327913 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.62s 2025-10-08 15:42:35.327923 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.62s 2025-10-08 15:42:35.327933 | orchestrator | k3s_server : Deploy vip manifest ---------------------------------------- 1.57s 2025-10-08 15:42:35.327942 | orchestrator | 2025-10-08 15:42:35 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:35.327952 | orchestrator | 2025-10-08 15:42:35 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:35.327962 | orchestrator | 2025-10-08 15:42:35 | INFO  | Wait 1 second(s) until the next check 
2025-10-08 15:42:38.391414 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:38.391702 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:38.392099 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task c0e122ed-ffbb-4528-9b9b-8a9caf861c54 is in state STARTED 2025-10-08 15:42:38.395501 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:38.398452 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:38.398932 | orchestrator | 2025-10-08 15:42:38 | INFO  | Task 16fd9a97-509f-4b5d-8e42-0eb59139603b is in state STARTED 2025-10-08 15:42:38.398955 | orchestrator | 2025-10-08 15:42:38 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:41.525466 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:41.525533 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:41.525546 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task c0e122ed-ffbb-4528-9b9b-8a9caf861c54 is in state STARTED 2025-10-08 15:42:41.525558 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:41.525600 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:41.525612 | orchestrator | 2025-10-08 15:42:41 | INFO  | Task 16fd9a97-509f-4b5d-8e42-0eb59139603b is in state STARTED 2025-10-08 15:42:41.525623 | orchestrator | 2025-10-08 15:42:41 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:44.568814 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 
2025-10-08 15:42:44.568916 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:44.568950 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task c0e122ed-ffbb-4528-9b9b-8a9caf861c54 is in state SUCCESS 2025-10-08 15:42:44.568962 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:44.568973 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:44.568984 | orchestrator | 2025-10-08 15:42:44 | INFO  | Task 16fd9a97-509f-4b5d-8e42-0eb59139603b is in state STARTED 2025-10-08 15:42:44.568995 | orchestrator | 2025-10-08 15:42:44 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:47.523878 | orchestrator | 2025-10-08 15:42:47 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:47.524361 | orchestrator | 2025-10-08 15:42:47 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:47.525011 | orchestrator | 2025-10-08 15:42:47 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:47.526389 | orchestrator | 2025-10-08 15:42:47 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:47.526802 | orchestrator | 2025-10-08 15:42:47 | INFO  | Task 16fd9a97-509f-4b5d-8e42-0eb59139603b is in state SUCCESS 2025-10-08 15:42:47.526913 | orchestrator | 2025-10-08 15:42:47 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:50.567561 | orchestrator | 2025-10-08 15:42:50 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:50.568841 | orchestrator | 2025-10-08 15:42:50 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:50.569723 | orchestrator | 2025-10-08 15:42:50 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 
2025-10-08 15:42:50.571199 | orchestrator | 2025-10-08 15:42:50 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:50.571303 | orchestrator | 2025-10-08 15:42:50 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:53.615098 | orchestrator | 2025-10-08 15:42:53 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:53.615604 | orchestrator | 2025-10-08 15:42:53 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:53.616557 | orchestrator | 2025-10-08 15:42:53 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:53.617691 | orchestrator | 2025-10-08 15:42:53 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:53.617718 | orchestrator | 2025-10-08 15:42:53 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:56.670901 | orchestrator | 2025-10-08 15:42:56 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:56.670996 | orchestrator | 2025-10-08 15:42:56 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:56.671038 | orchestrator | 2025-10-08 15:42:56 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:56.671731 | orchestrator | 2025-10-08 15:42:56 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:56.671757 | orchestrator | 2025-10-08 15:42:56 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:42:59.701707 | orchestrator | 2025-10-08 15:42:59 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:42:59.702382 | orchestrator | 2025-10-08 15:42:59 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:42:59.704484 | orchestrator | 2025-10-08 15:42:59 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:42:59.707389 | 
orchestrator | 2025-10-08 15:42:59 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:42:59.707423 | orchestrator | 2025-10-08 15:42:59 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:43:02.752823 | orchestrator | 2025-10-08 15:43:02 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:43:02.753908 | orchestrator | 2025-10-08 15:43:02 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:43:02.755267 | orchestrator | 2025-10-08 15:43:02 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:43:02.755290 | orchestrator | 2025-10-08 15:43:02 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:43:02.755303 | orchestrator | 2025-10-08 15:43:02 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:43:05.792697 | orchestrator | 2025-10-08 15:43:05 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:43:05.793910 | orchestrator | 2025-10-08 15:43:05 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:43:05.796804 | orchestrator | 2025-10-08 15:43:05 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:43:05.798144 | orchestrator | 2025-10-08 15:43:05 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:43:05.798167 | orchestrator | 2025-10-08 15:43:05 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:43:08.827865 | orchestrator | 2025-10-08 15:43:08 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:43:08.830741 | orchestrator | 2025-10-08 15:43:08 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state STARTED 2025-10-08 15:43:08.831370 | orchestrator | 2025-10-08 15:43:08 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:43:08.833708 | orchestrator | 2025-10-08 
15:43:08 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:43:08.833734 | orchestrator | 2025-10-08 15:43:08 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:06.703449 | orchestrator | 2025-10-08 15:44:06 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:06.706893 | orchestrator | 2025-10-08 15:44:06.706939 | orchestrator | 2025-10-08 15:44:06.706953 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-10-08 15:44:06.706965 | orchestrator | 2025-10-08 15:44:06.706976 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-10-08 15:44:06.706988 | orchestrator | Wednesday 08 October 2025 15:42:39 +0000 (0:00:00.341) 0:00:00.341 ***** 2025-10-08 15:44:06.707000 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-10-08 15:44:06.707011 | orchestrator | 2025-10-08 15:44:06.707022 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-10-08 15:44:06.707033 | orchestrator | Wednesday 08 October 2025 15:42:40 +0000 (0:00:00.609) 0:00:00.951 ***** 2025-10-08 15:44:06.707044 | orchestrator | changed: [testbed-manager] 2025-10-08 15:44:06.707056 | orchestrator | 2025-10-08 15:44:06.707067 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-10-08 15:44:06.707078 | orchestrator | Wednesday 08 October 2025 15:42:41 +0000 (0:00:01.301) 0:00:02.252 ***** 2025-10-08 15:44:06.707119 | orchestrator | changed: [testbed-manager] 2025-10-08 15:44:06.707129 | orchestrator | 2025-10-08 15:44:06.707140 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:44:06.707152 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:44:06.707165 | orchestrator | 2025-10-08 15:44:06.707176 | 
orchestrator | 2025-10-08 15:44:06.707187 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:44:06.707198 | orchestrator | Wednesday 08 October 2025 15:42:42 +0000 (0:00:00.501) 0:00:02.753 ***** 2025-10-08 15:44:06.707209 | orchestrator | =============================================================================== 2025-10-08 15:44:06.707220 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.30s 2025-10-08 15:44:06.707230 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.61s 2025-10-08 15:44:06.707241 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2025-10-08 15:44:06.707252 | orchestrator | 2025-10-08 15:44:06.707263 | orchestrator | 2025-10-08 15:44:06.707274 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-10-08 15:44:06.707284 | orchestrator | 2025-10-08 15:44:06.707295 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-10-08 15:44:06.707306 | orchestrator | Wednesday 08 October 2025 15:42:38 +0000 (0:00:00.123) 0:00:00.123 ***** 2025-10-08 15:44:06.707317 | orchestrator | ok: [testbed-manager] 2025-10-08 15:44:06.707329 | orchestrator | 2025-10-08 15:44:06.707340 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-10-08 15:44:06.707350 | orchestrator | Wednesday 08 October 2025 15:42:39 +0000 (0:00:00.555) 0:00:00.679 ***** 2025-10-08 15:44:06.707362 | orchestrator | ok: [testbed-manager] 2025-10-08 15:44:06.707373 | orchestrator | 2025-10-08 15:44:06.707384 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-10-08 15:44:06.707395 | orchestrator | Wednesday 08 October 2025 15:42:39 +0000 (0:00:00.509) 0:00:01.188 ***** 2025-10-08 15:44:06.707406 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-10-08 15:44:06.707417 | orchestrator | 2025-10-08 15:44:06.707428 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-10-08 15:44:06.707439 | orchestrator | Wednesday 08 October 2025 15:42:40 +0000 (0:00:00.634) 0:00:01.823 ***** 2025-10-08 15:44:06.707449 | orchestrator | changed: [testbed-manager] 2025-10-08 15:44:06.707460 | orchestrator | 2025-10-08 15:44:06.707471 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-10-08 15:44:06.707497 | orchestrator | Wednesday 08 October 2025 15:42:41 +0000 (0:00:01.679) 0:00:03.503 ***** 2025-10-08 15:44:06.707511 | orchestrator | changed: [testbed-manager] 2025-10-08 15:44:06.707523 | orchestrator | 2025-10-08 15:44:06.707535 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-10-08 15:44:06.707563 | orchestrator | Wednesday 08 October 2025 15:42:42 +0000 (0:00:00.499) 0:00:04.003 ***** 2025-10-08 15:44:06.707576 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-08 15:44:06.707588 | orchestrator | 2025-10-08 15:44:06.707600 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-10-08 15:44:06.707612 | orchestrator | Wednesday 08 October 2025 15:42:43 +0000 (0:00:01.354) 0:00:05.357 ***** 2025-10-08 15:44:06.707625 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-08 15:44:06.707637 | orchestrator | 2025-10-08 15:44:06.707649 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-10-08 15:44:06.707661 | orchestrator | Wednesday 08 October 2025 15:42:44 +0000 (0:00:00.769) 0:00:06.126 ***** 2025-10-08 15:44:06.707673 | orchestrator | ok: [testbed-manager] 2025-10-08 15:44:06.707685 | orchestrator | 2025-10-08 15:44:06.707698 | orchestrator | TASK [Enable kubectl command line completion] 
********************************** 2025-10-08 15:44:06.707710 | orchestrator | Wednesday 08 October 2025 15:42:44 +0000 (0:00:00.397) 0:00:06.524 ***** 2025-10-08 15:44:06.707722 | orchestrator | ok: [testbed-manager] 2025-10-08 15:44:06.707735 | orchestrator | 2025-10-08 15:44:06.707747 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:44:06.707760 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:44:06.707773 | orchestrator | 2025-10-08 15:44:06.707785 | orchestrator | 2025-10-08 15:44:06.707797 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:44:06.707810 | orchestrator | Wednesday 08 October 2025 15:42:45 +0000 (0:00:00.307) 0:00:06.832 ***** 2025-10-08 15:44:06.707821 | orchestrator | =============================================================================== 2025-10-08 15:44:06.707832 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.68s 2025-10-08 15:44:06.707843 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.35s 2025-10-08 15:44:06.707853 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s 2025-10-08 15:44:06.707878 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.63s 2025-10-08 15:44:06.707890 | orchestrator | Get home directory of operator user ------------------------------------- 0.56s 2025-10-08 15:44:06.707901 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2025-10-08 15:44:06.707912 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s 2025-10-08 15:44:06.707922 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2025-10-08 15:44:06.707933 | 
orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-10-08 15:44:06.707944 | orchestrator | 2025-10-08 15:44:06.707955 | orchestrator | 2025-10-08 15:44:06.707965 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-10-08 15:44:06.707976 | orchestrator | 2025-10-08 15:44:06.707987 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-10-08 15:44:06.707998 | orchestrator | Wednesday 08 October 2025 15:41:43 +0000 (0:00:00.194) 0:00:00.194 ***** 2025-10-08 15:44:06.708008 | orchestrator | ok: [localhost] => { 2025-10-08 15:44:06.708020 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-10-08 15:44:06.708031 | orchestrator | } 2025-10-08 15:44:06.708042 | orchestrator | 2025-10-08 15:44:06.708053 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-10-08 15:44:06.708064 | orchestrator | Wednesday 08 October 2025 15:41:43 +0000 (0:00:00.041) 0:00:00.235 ***** 2025-10-08 15:44:06.708076 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-10-08 15:44:06.708107 | orchestrator | ...ignoring 2025-10-08 15:44:06.708127 | orchestrator | 2025-10-08 15:44:06.708138 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-10-08 15:44:06.708149 | orchestrator | Wednesday 08 October 2025 15:41:46 +0000 (0:00:02.961) 0:00:03.197 ***** 2025-10-08 15:44:06.708160 | orchestrator | skipping: [localhost] 2025-10-08 15:44:06.708171 | orchestrator | 2025-10-08 15:44:06.708182 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-10-08 15:44:06.708193 | orchestrator | Wednesday 08 October 2025 15:41:47 +0000 (0:00:00.206) 0:00:03.403 ***** 2025-10-08 15:44:06.708204 | orchestrator | ok: [localhost] 2025-10-08 15:44:06.708215 | orchestrator | 2025-10-08 15:44:06.708226 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:44:06.708237 | orchestrator | 2025-10-08 15:44:06.708247 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:44:06.708258 | orchestrator | Wednesday 08 October 2025 15:41:47 +0000 (0:00:00.402) 0:00:03.805 ***** 2025-10-08 15:44:06.708269 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:06.708280 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:06.708292 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:06.708302 | orchestrator | 2025-10-08 15:44:06.708313 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:44:06.708324 | orchestrator | Wednesday 08 October 2025 15:41:48 +0000 (0:00:00.740) 0:00:04.546 ***** 2025-10-08 15:44:06.708335 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-10-08 15:44:06.708346 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-10-08 15:44:06.708357 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-10-08 15:44:06.708368 | orchestrator | 2025-10-08 15:44:06.708379 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-10-08 15:44:06.708390 | orchestrator | 2025-10-08 15:44:06.708406 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-08 15:44:06.708418 | orchestrator | Wednesday 08 October 2025 15:41:48 +0000 (0:00:00.701) 0:00:05.247 ***** 2025-10-08 15:44:06.708429 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:06.708440 | orchestrator | 2025-10-08 15:44:06.708451 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-10-08 15:44:06.708462 | orchestrator | Wednesday 08 October 2025 15:41:49 +0000 (0:00:00.684) 0:00:05.931 ***** 2025-10-08 15:44:06.708473 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:06.708484 | orchestrator | 2025-10-08 15:44:06.708494 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-10-08 15:44:06.708506 | orchestrator | Wednesday 08 October 2025 15:41:50 +0000 (0:00:01.007) 0:00:06.939 ***** 2025-10-08 15:44:06.708517 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.708528 | orchestrator | 2025-10-08 15:44:06.708538 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-10-08 15:44:06.708549 | orchestrator | Wednesday 08 October 2025 15:41:50 +0000 (0:00:00.318) 0:00:07.258 ***** 2025-10-08 15:44:06.708560 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.708571 | orchestrator | 2025-10-08 15:44:06.708582 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-10-08 15:44:06.708593 | 
orchestrator | Wednesday 08 October 2025 15:41:51 +0000 (0:00:00.460) 0:00:07.718 ***** 2025-10-08 15:44:06.708604 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.708614 | orchestrator | 2025-10-08 15:44:06.708625 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-10-08 15:44:06.708636 | orchestrator | Wednesday 08 October 2025 15:41:51 +0000 (0:00:00.453) 0:00:08.171 ***** 2025-10-08 15:44:06.708647 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.708658 | orchestrator | 2025-10-08 15:44:06.708668 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-08 15:44:06.708679 | orchestrator | Wednesday 08 October 2025 15:41:52 +0000 (0:00:00.566) 0:00:08.738 ***** 2025-10-08 15:44:06.708697 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:06.708708 | orchestrator | 2025-10-08 15:44:06.708719 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-10-08 15:44:06.708737 | orchestrator | Wednesday 08 October 2025 15:41:53 +0000 (0:00:00.877) 0:00:09.615 ***** 2025-10-08 15:44:06.708748 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:06.708759 | orchestrator | 2025-10-08 15:44:06.708770 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-10-08 15:44:06.708781 | orchestrator | Wednesday 08 October 2025 15:41:54 +0000 (0:00:00.860) 0:00:10.476 ***** 2025-10-08 15:44:06.708792 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.708803 | orchestrator | 2025-10-08 15:44:06.708814 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-10-08 15:44:06.708824 | orchestrator | Wednesday 08 October 2025 15:41:54 +0000 (0:00:00.410) 0:00:10.886 ***** 2025-10-08 15:44:06.708835 | orchestrator | 
skipping: [testbed-node-0] 2025-10-08 15:44:06.708846 | orchestrator | 2025-10-08 15:44:06.708857 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-10-08 15:44:06.708868 | orchestrator | Wednesday 08 October 2025 15:41:55 +0000 (0:00:01.074) 0:00:11.961 ***** 2025-10-08 15:44:06.708884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.708906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.708921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.708940 | orchestrator | 2025-10-08 15:44:06.708951 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-10-08 15:44:06.708962 | orchestrator | Wednesday 08 October 2025 15:41:56 +0000 (0:00:01.341) 0:00:13.302 ***** 2025-10-08 15:44:06.708982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.708996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.709013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.709042 | orchestrator | 2025-10-08 15:44:06.709054 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-10-08 15:44:06.709097 | orchestrator | Wednesday 08 October 2025 15:42:00 +0000 (0:00:03.588) 0:00:16.891 ***** 2025-10-08 15:44:06.709108 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-08 15:44:06.709120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-08 15:44:06.709131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-08 15:44:06.709142 | orchestrator | 2025-10-08 15:44:06.709152 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-10-08 15:44:06.709163 | orchestrator | Wednesday 08 October 2025 15:42:02 +0000 (0:00:02.120) 0:00:19.011 ***** 2025-10-08 15:44:06.709174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-08 15:44:06.709185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-08 15:44:06.709196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-08 15:44:06.709207 | orchestrator | 2025-10-08 15:44:06.709218 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-10-08 15:44:06.709236 | orchestrator | Wednesday 08 October 2025 15:42:04 +0000 (0:00:02.154) 0:00:21.166 ***** 2025-10-08 15:44:06.709247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-08 15:44:06.709258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-08 15:44:06.709269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-08 15:44:06.709280 | orchestrator | 2025-10-08 15:44:06.709291 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-10-08 15:44:06.709302 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:01.385) 0:00:22.552 ***** 2025-10-08 15:44:06.709313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-08 15:44:06.709324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-08 15:44:06.709335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-08 15:44:06.709346 | orchestrator | 2025-10-08 15:44:06.709357 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-10-08 15:44:06.709368 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:03.036) 0:00:25.589 ***** 2025-10-08 15:44:06.709379 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-08 15:44:06.709390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-08 15:44:06.709401 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-08 15:44:06.709412 | orchestrator | 2025-10-08 15:44:06.709423 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-10-08 15:44:06.709434 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:02.536) 0:00:28.125 ***** 2025-10-08 15:44:06.709445 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-08 15:44:06.709456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-08 15:44:06.709467 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-08 15:44:06.709478 | orchestrator | 2025-10-08 15:44:06.709489 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-08 15:44:06.709500 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:01.999) 0:00:30.125 ***** 2025-10-08 15:44:06.709511 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.709522 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:06.709540 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:06.709594 | orchestrator | 2025-10-08 15:44:06.709632 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-10-08 15:44:06.709643 | orchestrator | Wednesday 08 October 2025 
15:42:14 +0000 (0:00:00.471) 0:00:30.596 ***** 2025-10-08 15:44:06.709661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.709683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.709696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-08 15:44:06.709708 | orchestrator | 2025-10-08 15:44:06.709719 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-10-08 15:44:06.709731 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:01.548) 0:00:32.144 ***** 2025-10-08 15:44:06.709741 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:06.709753 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:06.709763 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:06.709774 | orchestrator | 2025-10-08 15:44:06.709785 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-10-08 
15:44:06.709811 | orchestrator | Wednesday 08 October 2025 15:42:16 +0000 (0:00:00.896) 0:00:33.040 ***** 2025-10-08 15:44:06.709823 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:06.709834 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:06.709845 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:06.709855 | orchestrator | 2025-10-08 15:44:06.709867 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-10-08 15:44:06.709878 | orchestrator | Wednesday 08 October 2025 15:42:25 +0000 (0:00:08.456) 0:00:41.497 ***** 2025-10-08 15:44:06.709888 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:06.709899 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:06.709911 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:06.709921 | orchestrator | 2025-10-08 15:44:06.709932 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-08 15:44:06.709943 | orchestrator | 2025-10-08 15:44:06.709954 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-08 15:44:06.709965 | orchestrator | Wednesday 08 October 2025 15:42:25 +0000 (0:00:00.278) 0:00:41.775 ***** 2025-10-08 15:44:06.709976 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:06.709987 | orchestrator | 2025-10-08 15:44:06.709998 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-08 15:44:06.710014 | orchestrator | Wednesday 08 October 2025 15:42:26 +0000 (0:00:00.783) 0:00:42.559 ***** 2025-10-08 15:44:06.710072 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:06.710115 | orchestrator | 2025-10-08 15:44:06.710127 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-08 15:44:06.710138 | orchestrator | Wednesday 08 October 2025 15:42:26 +0000 (0:00:00.426) 0:00:42.985 ***** 2025-10-08 
15:44:06.710149 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:06.710159 | orchestrator | 2025-10-08 15:44:06.710170 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-08 15:44:06.710182 | orchestrator | Wednesday 08 October 2025 15:42:28 +0000 (0:00:01.574) 0:00:44.560 ***** 2025-10-08 15:44:06.710192 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:06.710203 | orchestrator | 2025-10-08 15:44:06.710214 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-08 15:44:06.710225 | orchestrator | 2025-10-08 15:44:06.710236 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-08 15:44:06.710247 | orchestrator | Wednesday 08 October 2025 15:43:23 +0000 (0:00:55.546) 0:01:40.106 ***** 2025-10-08 15:44:06.710258 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:06.710269 | orchestrator | 2025-10-08 15:44:06.710279 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-08 15:44:06.710290 | orchestrator | Wednesday 08 October 2025 15:43:24 +0000 (0:00:01.052) 0:01:41.159 ***** 2025-10-08 15:44:06.710301 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:06.710312 | orchestrator | 2025-10-08 15:44:06.710323 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-08 15:44:06.710334 | orchestrator | Wednesday 08 October 2025 15:43:25 +0000 (0:00:00.273) 0:01:41.432 ***** 2025-10-08 15:44:06.710345 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:06.710355 | orchestrator | 2025-10-08 15:44:06.710366 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-08 15:44:06.710377 | orchestrator | Wednesday 08 October 2025 15:43:32 +0000 (0:00:07.020) 0:01:48.453 ***** 2025-10-08 15:44:06.710388 | orchestrator | changed: 
[testbed-node-1] 2025-10-08 15:44:06.710399 | orchestrator | 2025-10-08 15:44:06.710410 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-08 15:44:06.710421 | orchestrator | 2025-10-08 15:44:06.710432 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-08 15:44:06.710442 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:11.159) 0:01:59.613 ***** 2025-10-08 15:44:06.710453 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:06.710464 | orchestrator | 2025-10-08 15:44:06.710490 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-08 15:44:06.710501 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:00.683) 0:02:00.296 ***** 2025-10-08 15:44:06.710512 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:06.710523 | orchestrator | 2025-10-08 15:44:06.710534 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-08 15:44:06.710545 | orchestrator | Wednesday 08 October 2025 15:43:44 +0000 (0:00:00.485) 0:02:00.781 ***** 2025-10-08 15:44:06.710555 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:06.710566 | orchestrator | 2025-10-08 15:44:06.710577 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-08 15:44:06.710588 | orchestrator | Wednesday 08 October 2025 15:43:51 +0000 (0:00:07.059) 0:02:07.841 ***** 2025-10-08 15:44:06.710599 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:06.710610 | orchestrator | 2025-10-08 15:44:06.710621 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-10-08 15:44:06.710632 | orchestrator | 2025-10-08 15:44:06.710643 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-10-08 15:44:06.710654 | orchestrator | Wednesday 
08 October 2025 15:44:01 +0000 (0:00:09.619) 0:02:17.460 ***** 2025-10-08 15:44:06.710665 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:06.710676 | orchestrator | 2025-10-08 15:44:06.710687 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-10-08 15:44:06.710698 | orchestrator | Wednesday 08 October 2025 15:44:01 +0000 (0:00:00.779) 0:02:18.240 ***** 2025-10-08 15:44:06.710709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-08 15:44:06.710720 | orchestrator | enable_outward_rabbitmq_True 2025-10-08 15:44:06.710731 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-08 15:44:06.710741 | orchestrator | outward_rabbitmq_restart 2025-10-08 15:44:06.710753 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:06.710764 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:06.710774 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:06.710785 | orchestrator | 2025-10-08 15:44:06.710796 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-10-08 15:44:06.710807 | orchestrator | skipping: no hosts matched 2025-10-08 15:44:06.710818 | orchestrator | 2025-10-08 15:44:06.710828 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-10-08 15:44:06.710839 | orchestrator | skipping: no hosts matched 2025-10-08 15:44:06.710850 | orchestrator | 2025-10-08 15:44:06.710861 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-10-08 15:44:06.710872 | orchestrator | skipping: no hosts matched 2025-10-08 15:44:06.710883 | orchestrator | 2025-10-08 15:44:06.710894 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:44:06.710905 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 
skipped=1  rescued=0 ignored=1  2025-10-08 15:44:06.710916 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-08 15:44:06.710928 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 15:44:06.710944 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 15:44:06.710956 | orchestrator | 2025-10-08 15:44:06.710967 | orchestrator | 2025-10-08 15:44:06.710977 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:44:06.710988 | orchestrator | Wednesday 08 October 2025 15:44:04 +0000 (0:00:02.433) 0:02:20.674 ***** 2025-10-08 15:44:06.710999 | orchestrator | =============================================================================== 2025-10-08 15:44:06.711016 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.32s 2025-10-08 15:44:06.711027 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.66s 2025-10-08 15:44:06.711038 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.46s 2025-10-08 15:44:06.711049 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.59s 2025-10-08 15:44:06.711060 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.04s 2025-10-08 15:44:06.711071 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.96s 2025-10-08 15:44:06.711102 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.54s 2025-10-08 15:44:06.711114 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.52s 2025-10-08 15:44:06.711125 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 
2.43s 2025-10-08 15:44:06.711136 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.15s 2025-10-08 15:44:06.711147 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.12s 2025-10-08 15:44:06.711158 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.00s 2025-10-08 15:44:06.711169 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.55s 2025-10-08 15:44:06.711179 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.39s 2025-10-08 15:44:06.711190 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.34s 2025-10-08 15:44:06.711201 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.19s 2025-10-08 15:44:06.711212 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.07s 2025-10-08 15:44:06.711229 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2025-10-08 15:44:06.711240 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.90s 2025-10-08 15:44:06.711251 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.88s 2025-10-08 15:44:06.711262 | orchestrator | 2025-10-08 15:44:06 | INFO  | Task e53a9322-0e67-4e51-805e-3cd7852832cc is in state SUCCESS 2025-10-08 15:44:06.711274 | orchestrator | 2025-10-08 15:44:06 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:06.711285 | orchestrator | 2025-10-08 15:44:06 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:06.711296 | orchestrator | 2025-10-08 15:44:06 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:09.746372 | orchestrator | 2025-10-08 15:44:09 | INFO  | Task 
fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:09.747912 | orchestrator | 2025-10-08 15:44:09 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:09.748496 | orchestrator | 2025-10-08 15:44:09 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:09.750357 | orchestrator | 2025-10-08 15:44:09 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:12.789005 | orchestrator | 2025-10-08 15:44:12 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:12.789898 | orchestrator | 2025-10-08 15:44:12 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:12.790663 | orchestrator | 2025-10-08 15:44:12 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:12.790681 | orchestrator | 2025-10-08 15:44:12 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:15.830942 | orchestrator | 2025-10-08 15:44:15 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:15.831422 | orchestrator | 2025-10-08 15:44:15 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:15.831878 | orchestrator | 2025-10-08 15:44:15 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:15.831900 | orchestrator | 2025-10-08 15:44:15 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:18.878505 | orchestrator | 2025-10-08 15:44:18 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:18.878587 | orchestrator | 2025-10-08 15:44:18 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:18.878602 | orchestrator | 2025-10-08 15:44:18 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:18.878634 | orchestrator | 2025-10-08 15:44:18 | INFO  | Wait 1 second(s) until the next 
check 2025-10-08 15:44:21.925710 | orchestrator | 2025-10-08 15:44:21 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:21.928549 | orchestrator | 2025-10-08 15:44:21 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:21.932607 | orchestrator | 2025-10-08 15:44:21 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:21.932653 | orchestrator | 2025-10-08 15:44:21 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:25.009623 | orchestrator | 2025-10-08 15:44:25 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:25.009720 | orchestrator | 2025-10-08 15:44:25 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:25.009734 | orchestrator | 2025-10-08 15:44:25 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:25.009746 | orchestrator | 2025-10-08 15:44:25 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:28.065860 | orchestrator | 2025-10-08 15:44:28 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:28.068434 | orchestrator | 2025-10-08 15:44:28 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:28.070640 | orchestrator | 2025-10-08 15:44:28 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:28.071050 | orchestrator | 2025-10-08 15:44:28 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:31.101886 | orchestrator | 2025-10-08 15:44:31 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:31.102900 | orchestrator | 2025-10-08 15:44:31 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:31.104251 | orchestrator | 2025-10-08 15:44:31 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 
15:44:31.104272 | orchestrator | 2025-10-08 15:44:31 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:34.138326 | orchestrator | 2025-10-08 15:44:34 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:34.140148 | orchestrator | 2025-10-08 15:44:34 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:34.143504 | orchestrator | 2025-10-08 15:44:34 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:34.143536 | orchestrator | 2025-10-08 15:44:34 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:37.176480 | orchestrator | 2025-10-08 15:44:37 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:37.176860 | orchestrator | 2025-10-08 15:44:37 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:37.177601 | orchestrator | 2025-10-08 15:44:37 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:37.177623 | orchestrator | 2025-10-08 15:44:37 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:40.212791 | orchestrator | 2025-10-08 15:44:40 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:40.212884 | orchestrator | 2025-10-08 15:44:40 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:40.213454 | orchestrator | 2025-10-08 15:44:40 | INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state STARTED 2025-10-08 15:44:40.213477 | orchestrator | 2025-10-08 15:44:40 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:43.255349 | orchestrator | 2025-10-08 15:44:43 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED 2025-10-08 15:44:43.257987 | orchestrator | 2025-10-08 15:44:43 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:44:43.264464 | orchestrator | 2025-10-08 15:44:43 | 
INFO  | Task 41436c97-5f8c-4e16-b6f7-e26cf48fe2a9 is in state SUCCESS 2025-10-08 15:44:43.264673 | orchestrator | 2025-10-08 15:44:43 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:44:43.266841 | orchestrator | 2025-10-08 15:44:43.266895 | orchestrator | 2025-10-08 15:44:43.266909 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:44:43.266971 | orchestrator | 2025-10-08 15:44:43.266984 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:44:43.266997 | orchestrator | Wednesday 08 October 2025 15:42:36 +0000 (0:00:00.146) 0:00:00.146 ***** 2025-10-08 15:44:43.267008 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.267020 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.267031 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.267042 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:44:43.267052 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:44:43.267075 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:44:43.267134 | orchestrator | 2025-10-08 15:44:43.267146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:44:43.267157 | orchestrator | Wednesday 08 October 2025 15:42:36 +0000 (0:00:00.617) 0:00:00.764 ***** 2025-10-08 15:44:43.267168 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-10-08 15:44:43.267179 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-10-08 15:44:43.267190 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-10-08 15:44:43.267201 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-10-08 15:44:43.267212 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-10-08 15:44:43.267223 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-10-08 15:44:43.267234 | orchestrator | 2025-10-08 15:44:43.267244 | orchestrator | PLAY 
[Apply role ovn-controller] *********************************************** 2025-10-08 15:44:43.267255 | orchestrator | 2025-10-08 15:44:43.267266 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-10-08 15:44:43.267277 | orchestrator | Wednesday 08 October 2025 15:42:37 +0000 (0:00:01.293) 0:00:02.057 ***** 2025-10-08 15:44:43.267290 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:44:43.267302 | orchestrator | 2025-10-08 15:44:43.267313 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-10-08 15:44:43.267324 | orchestrator | Wednesday 08 October 2025 15:42:40 +0000 (0:00:02.207) 0:00:04.264 ***** 2025-10-08 15:44:43.267338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267440 | orchestrator | 2025-10-08 15:44:43.267466 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-10-08 15:44:43.267479 | orchestrator | Wednesday 08 October 2025 15:42:42 +0000 (0:00:02.092) 0:00:06.357 ***** 2025-10-08 15:44:43.267497 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267580 | orchestrator | 2025-10-08 15:44:43.267593 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-10-08 15:44:43.267605 | orchestrator | Wednesday 08 October 2025 15:42:43 +0000 (0:00:01.738) 0:00:08.096 ***** 2025-10-08 15:44:43.267618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267660 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267717 | orchestrator | 2025-10-08 15:44:43.267800 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-10-08 15:44:43.267820 | orchestrator | Wednesday 08 
October 2025 15:42:45 +0000 (0:00:01.564) 0:00:09.661 ***** 2025-10-08 15:44:43.267832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267877 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267900 | orchestrator | 2025-10-08 15:44:43.267918 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-10-08 15:44:43.267929 | orchestrator | Wednesday 08 October 2025 15:42:47 +0000 (0:00:01.649) 0:00:11.310 ***** 2025-10-08 15:44:43.267945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.267997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.268008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.268020 | orchestrator | 2025-10-08 15:44:43.268031 | orchestrator | TASK [ovn-controller 
: Create br-int bridge on OpenvSwitch] ******************** 2025-10-08 15:44:43.268042 | orchestrator | Wednesday 08 October 2025 15:42:48 +0000 (0:00:01.458) 0:00:12.768 ***** 2025-10-08 15:44:43.268053 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.268064 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.268075 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.268085 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:44:43.268115 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:44:43.268126 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:44:43.268137 | orchestrator | 2025-10-08 15:44:43.268148 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-10-08 15:44:43.268159 | orchestrator | Wednesday 08 October 2025 15:42:51 +0000 (0:00:02.711) 0:00:15.479 ***** 2025-10-08 15:44:43.268169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-10-08 15:44:43.268180 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-10-08 15:44:43.268191 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-10-08 15:44:43.268202 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-10-08 15:44:43.268212 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-10-08 15:44:43.268223 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-10-08 15:44:43.268234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268262 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268280 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268291 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268302 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-08 15:44:43.268318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268341 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268351 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268362 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268373 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-08 15:44:43.268384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268417 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268428 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268438 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-08 15:44:43.268449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-08 15:44:43.268514 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268535 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268546 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268557 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268568 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-08 15:44:43.268578 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-08 15:44:43.268589 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-08 15:44:43.268606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-08 15:44:43.268617 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-08 15:44:43.268628 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-08 15:44:43.268638 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-08 15:44:43.268649 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-10-08 15:44:43.268660 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-10-08 15:44:43.268676 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-10-08 15:44:43.268688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-10-08 15:44:43.268698 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-10-08 15:44:43.268714 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-10-08 15:44:43.268725 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 
'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-08 15:44:43.268736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-08 15:44:43.268747 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-08 15:44:43.268758 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-08 15:44:43.268769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-08 15:44:43.268780 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-08 15:44:43.268791 | orchestrator | 2025-10-08 15:44:43.268802 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268813 | orchestrator | Wednesday 08 October 2025 15:43:10 +0000 (0:00:19.128) 0:00:34.608 ***** 2025-10-08 15:44:43.268825 | orchestrator | 2025-10-08 15:44:43.268836 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268847 | orchestrator | Wednesday 08 October 2025 15:43:10 +0000 (0:00:00.297) 0:00:34.906 ***** 2025-10-08 15:44:43.268858 | orchestrator | 2025-10-08 15:44:43.268868 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268879 | orchestrator | Wednesday 08 October 2025 15:43:10 +0000 (0:00:00.075) 0:00:34.981 ***** 2025-10-08 15:44:43.268890 | orchestrator | 2025-10-08 15:44:43.268901 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268911 | orchestrator | Wednesday 08 October 2025 15:43:10 +0000 (0:00:00.065) 0:00:35.046 ***** 
2025-10-08 15:44:43.268922 | orchestrator | 2025-10-08 15:44:43.268933 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268944 | orchestrator | Wednesday 08 October 2025 15:43:11 +0000 (0:00:00.132) 0:00:35.179 ***** 2025-10-08 15:44:43.268955 | orchestrator | 2025-10-08 15:44:43.268966 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-08 15:44:43.268977 | orchestrator | Wednesday 08 October 2025 15:43:11 +0000 (0:00:00.066) 0:00:35.245 ***** 2025-10-08 15:44:43.268988 | orchestrator | 2025-10-08 15:44:43.268999 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-10-08 15:44:43.269014 | orchestrator | Wednesday 08 October 2025 15:43:11 +0000 (0:00:00.074) 0:00:35.319 ***** 2025-10-08 15:44:43.269025 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269036 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269047 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:44:43.269058 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:44:43.269069 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269080 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:44:43.269106 | orchestrator | 2025-10-08 15:44:43.269118 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-10-08 15:44:43.269128 | orchestrator | Wednesday 08 October 2025 15:43:13 +0000 (0:00:02.059) 0:00:37.379 ***** 2025-10-08 15:44:43.269139 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.269151 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.269161 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.269172 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:44:43.269183 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:44:43.269194 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:44:43.269204 | orchestrator | 
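The "Configure OVN in OVSDB" task above writes a set of Open vSwitch `external_ids` on every chassis: a per-node Geneve tunnel endpoint, a shared `ovn-remote` string pointing at all three southbound DB members on port 6642, and two probe intervals. The sketch below assembles those same values in Python; the node IPs, port, and key names are taken from the log, but the helper function itself is a hypothetical illustration, not kolla-ansible code.

```python
# Hypothetical helper mirroring the external_ids values seen in the log.
SB_DB_PORT = 6642  # OVN southbound DB port used in the log's ovn-remote values
DB_HOSTS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

def chassis_external_ids(encap_ip: str) -> dict:
    """Build the external_ids ovn-controller needs on one chassis."""
    return {
        "ovn-encap-ip": encap_ip,              # this node's tunnel endpoint
        "ovn-encap-type": "geneve",            # overlay encapsulation
        "ovn-remote": ",".join(                # all three SB DB cluster members
            f"tcp:{host}:{SB_DB_PORT}" for host in DB_HOSTS
        ),
        "ovn-remote-probe-interval": "60000",  # milliseconds
        "ovn-openflow-probe-interval": "60",   # seconds
        "ovn-monitor-all": False,
    }

ids = chassis_external_ids("192.168.16.13")  # e.g. testbed-node-3
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

On a real node these keys would land in the `Open_vSwitch` table (e.g. via `ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve`), which is what the task's `changed:` items reflect per host.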
2025-10-08 15:44:43.269215 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-10-08 15:44:43.269226 | orchestrator | 2025-10-08 15:44:43.269237 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-08 15:44:43.269248 | orchestrator | Wednesday 08 October 2025 15:43:21 +0000 (0:00:08.508) 0:00:45.887 ***** 2025-10-08 15:44:43.269259 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:43.269270 | orchestrator | 2025-10-08 15:44:43.269281 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-08 15:44:43.269292 | orchestrator | Wednesday 08 October 2025 15:43:22 +0000 (0:00:00.880) 0:00:46.768 ***** 2025-10-08 15:44:43.269303 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:43.269314 | orchestrator | 2025-10-08 15:44:43.269325 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-10-08 15:44:43.269335 | orchestrator | Wednesday 08 October 2025 15:43:23 +0000 (0:00:00.645) 0:00:47.414 ***** 2025-10-08 15:44:43.269346 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269357 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269368 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269379 | orchestrator | 2025-10-08 15:44:43.269389 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-10-08 15:44:43.269400 | orchestrator | Wednesday 08 October 2025 15:43:24 +0000 (0:00:01.394) 0:00:48.809 ***** 2025-10-08 15:44:43.269411 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269422 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269433 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269449 | orchestrator | 
2025-10-08 15:44:43.269460 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-10-08 15:44:43.269471 | orchestrator | Wednesday 08 October 2025 15:43:25 +0000 (0:00:00.654) 0:00:49.463 ***** 2025-10-08 15:44:43.269482 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269492 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269503 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269514 | orchestrator | 2025-10-08 15:44:43.269524 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-10-08 15:44:43.269540 | orchestrator | Wednesday 08 October 2025 15:43:25 +0000 (0:00:00.514) 0:00:49.977 ***** 2025-10-08 15:44:43.269551 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269562 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269572 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269583 | orchestrator | 2025-10-08 15:44:43.269594 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-10-08 15:44:43.269604 | orchestrator | Wednesday 08 October 2025 15:43:26 +0000 (0:00:00.345) 0:00:50.323 ***** 2025-10-08 15:44:43.269621 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.269632 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.269643 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.269653 | orchestrator | 2025-10-08 15:44:43.269664 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-10-08 15:44:43.269675 | orchestrator | Wednesday 08 October 2025 15:43:26 +0000 (0:00:00.612) 0:00:50.935 ***** 2025-10-08 15:44:43.269685 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.269696 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.269707 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.269718 | orchestrator | 2025-10-08 15:44:43.269729 | orchestrator | TASK [ovn-db 
: Check OVN NB service port liveness] ***************************** 2025-10-08 15:44:43.269739 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:00.332) 0:00:51.267 ***** 2025-10-08 15:44:43.269750 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.269761 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.269771 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.269782 | orchestrator | 2025-10-08 15:44:43.269793 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-10-08 15:44:43.269804 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:00.289) 0:00:51.557 ***** 2025-10-08 15:44:43.269815 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.269826 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.269837 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.269848 | orchestrator | 2025-10-08 15:44:43.269858 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-10-08 15:44:43.269869 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:00.392) 0:00:51.949 ***** 2025-10-08 15:44:43.269880 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.269891 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.269901 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.269913 | orchestrator | 2025-10-08 15:44:43.269924 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-10-08 15:44:43.269934 | orchestrator | Wednesday 08 October 2025 15:43:28 +0000 (0:00:00.454) 0:00:52.404 ***** 2025-10-08 15:44:43.269945 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.269956 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.269966 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.269977 | orchestrator | 2025-10-08 15:44:43.269988 | orchestrator | TASK [ovn-db : 
Fail on existing OVN NB cluster with no leader] ***************** 2025-10-08 15:44:43.269999 | orchestrator | Wednesday 08 October 2025 15:43:28 +0000 (0:00:00.313) 0:00:52.717 ***** 2025-10-08 15:44:43.270009 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270159 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270172 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270183 | orchestrator | 2025-10-08 15:44:43.270194 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-10-08 15:44:43.270205 | orchestrator | Wednesday 08 October 2025 15:43:28 +0000 (0:00:00.282) 0:00:53.000 ***** 2025-10-08 15:44:43.270216 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270227 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270237 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270248 | orchestrator | 2025-10-08 15:44:43.270259 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-10-08 15:44:43.270270 | orchestrator | Wednesday 08 October 2025 15:43:29 +0000 (0:00:00.303) 0:00:53.303 ***** 2025-10-08 15:44:43.270281 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270292 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270303 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270313 | orchestrator | 2025-10-08 15:44:43.270323 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-10-08 15:44:43.270332 | orchestrator | Wednesday 08 October 2025 15:43:29 +0000 (0:00:00.414) 0:00:53.718 ***** 2025-10-08 15:44:43.270342 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270360 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270369 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270379 | orchestrator | 2025-10-08 15:44:43.270388 | orchestrator | TASK [ovn-db : Get 
OVN SB database information] ******************************** 2025-10-08 15:44:43.270398 | orchestrator | Wednesday 08 October 2025 15:43:29 +0000 (0:00:00.297) 0:00:54.015 ***** 2025-10-08 15:44:43.270408 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270417 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270427 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270436 | orchestrator | 2025-10-08 15:44:43.270446 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-10-08 15:44:43.270456 | orchestrator | Wednesday 08 October 2025 15:43:30 +0000 (0:00:00.329) 0:00:54.344 ***** 2025-10-08 15:44:43.270465 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270475 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270484 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270494 | orchestrator | 2025-10-08 15:44:43.270504 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-10-08 15:44:43.270513 | orchestrator | Wednesday 08 October 2025 15:43:30 +0000 (0:00:00.356) 0:00:54.701 ***** 2025-10-08 15:44:43.270523 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270532 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270549 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270559 | orchestrator | 2025-10-08 15:44:43.270568 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-08 15:44:43.270578 | orchestrator | Wednesday 08 October 2025 15:43:30 +0000 (0:00:00.337) 0:00:55.038 ***** 2025-10-08 15:44:43.270588 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:44:43.270597 | orchestrator | 2025-10-08 15:44:43.270607 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 
2025-10-08 15:44:43.270621 | orchestrator | Wednesday 08 October 2025 15:43:31 +0000 (0:00:00.858) 0:00:55.897 ***** 2025-10-08 15:44:43.270631 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.270641 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.270651 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.270661 | orchestrator | 2025-10-08 15:44:43.270670 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-10-08 15:44:43.270680 | orchestrator | Wednesday 08 October 2025 15:43:32 +0000 (0:00:00.512) 0:00:56.409 ***** 2025-10-08 15:44:43.270690 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.270700 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.270709 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.270719 | orchestrator | 2025-10-08 15:44:43.270728 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-10-08 15:44:43.270738 | orchestrator | Wednesday 08 October 2025 15:43:32 +0000 (0:00:00.450) 0:00:56.860 ***** 2025-10-08 15:44:43.270748 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270758 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270767 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270777 | orchestrator | 2025-10-08 15:44:43.270787 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-10-08 15:44:43.270797 | orchestrator | Wednesday 08 October 2025 15:43:33 +0000 (0:00:00.599) 0:00:57.459 ***** 2025-10-08 15:44:43.270806 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270816 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270826 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270835 | orchestrator | 2025-10-08 15:44:43.270845 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-10-08 15:44:43.270855 | 
orchestrator | Wednesday 08 October 2025 15:43:33 +0000 (0:00:00.387) 0:00:57.847 ***** 2025-10-08 15:44:43.270865 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270874 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270890 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270900 | orchestrator | 2025-10-08 15:44:43.270909 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-10-08 15:44:43.270919 | orchestrator | Wednesday 08 October 2025 15:43:34 +0000 (0:00:00.353) 0:00:58.201 ***** 2025-10-08 15:44:43.270929 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270938 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.270948 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.270957 | orchestrator | 2025-10-08 15:44:43.270967 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-10-08 15:44:43.270977 | orchestrator | Wednesday 08 October 2025 15:43:34 +0000 (0:00:00.379) 0:00:58.580 ***** 2025-10-08 15:44:43.270986 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.270996 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.271006 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.271015 | orchestrator | 2025-10-08 15:44:43.271025 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-10-08 15:44:43.271035 | orchestrator | Wednesday 08 October 2025 15:43:35 +0000 (0:00:00.664) 0:00:59.245 ***** 2025-10-08 15:44:43.271044 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.271054 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.271064 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.271073 | orchestrator | 2025-10-08 15:44:43.271083 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-10-08 
15:44:43.271134 | orchestrator | Wednesday 08 October 2025 15:43:35 +0000 (0:00:00.460) 0:00:59.705 ***** 2025-10-08 15:44:43.271146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271264 | orchestrator | 
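The loop items printed above show the data shape the ovn-db role iterates over: a dict of service definitions, each carrying a container name, image, and bind-mount list. A minimal sketch of that structure, with values copied from the log; the `docker_args` helper is a hypothetical illustration of how such a definition maps to container-run arguments, not kolla-ansible's actual code.

```python
# Service-definition mapping mirroring the loop items in the log above.
# Keys and values are taken verbatim from the log; docker_args is a
# hypothetical helper, not part of kolla-ansible.
services = {
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.2",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
    },
}

def docker_args(svc: dict) -> list[str]:
    """Render a service definition as docker-run style arguments."""
    args = ["--name", svc["container_name"]]
    for vol in svc["volumes"]:
        args += ["-v", vol]          # one -v flag per bind mount / volume
    return args + [svc["image"]]
```

Each `changed:`/`ok:` line in the task output corresponds to one `(key, value)` item from a dict like this, applied per node.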
2025-10-08 15:44:43.271274 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-10-08 15:44:43.271284 | orchestrator | Wednesday 08 October 2025 15:43:37 +0000 (0:00:01.613) 0:01:01.319 ***** 2025-10-08 15:44:43.271294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271404 | orchestrator | 2025-10-08 15:44:43.271414 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-10-08 15:44:43.271424 | orchestrator | Wednesday 08 October 2025 15:43:41 +0000 (0:00:04.086) 0:01:05.405 ***** 2025-10-08 15:44:43.271434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-10-08 15:44:43.271470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.271526 | orchestrator | 2025-10-08 15:44:43.271534 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.271542 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:02.373) 0:01:07.779 ***** 2025-10-08 15:44:43.271550 | orchestrator | 2025-10-08 15:44:43.271558 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.271566 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:00.071) 0:01:07.850 ***** 2025-10-08 15:44:43.271574 | orchestrator | 2025-10-08 15:44:43.271582 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.271589 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:00.063) 0:01:07.914 ***** 2025-10-08 15:44:43.271597 | orchestrator | 2025-10-08 15:44:43.271605 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-10-08 15:44:43.271613 | orchestrator | Wednesday 08 October 2025 15:43:43 +0000 (0:00:00.073) 0:01:07.987 ***** 2025-10-08 15:44:43.271621 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.271629 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.271637 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.271645 | orchestrator | 2025-10-08 15:44:43.271652 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-10-08 15:44:43.271660 | orchestrator | Wednesday 08 October 2025 15:43:51 +0000 (0:00:07.675) 0:01:15.663 ***** 2025-10-08 15:44:43.271668 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.271676 | 
orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.271684 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.271692 | orchestrator | 2025-10-08 15:44:43.271700 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-10-08 15:44:43.271708 | orchestrator | Wednesday 08 October 2025 15:43:54 +0000 (0:00:02.441) 0:01:18.104 ***** 2025-10-08 15:44:43.271716 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.271724 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.271731 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.271739 | orchestrator | 2025-10-08 15:44:43.271747 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-10-08 15:44:43.271755 | orchestrator | Wednesday 08 October 2025 15:44:01 +0000 (0:00:07.579) 0:01:25.683 ***** 2025-10-08 15:44:43.271763 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.271771 | orchestrator | 2025-10-08 15:44:43.271778 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-10-08 15:44:43.271786 | orchestrator | Wednesday 08 October 2025 15:44:02 +0000 (0:00:00.492) 0:01:26.176 ***** 2025-10-08 15:44:43.271794 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.271802 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.271810 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.271822 | orchestrator | 2025-10-08 15:44:43.271830 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-10-08 15:44:43.271838 | orchestrator | Wednesday 08 October 2025 15:44:03 +0000 (0:00:01.081) 0:01:27.257 ***** 2025-10-08 15:44:43.271846 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.271854 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.271861 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.271869 | orchestrator | 2025-10-08 
15:44:43.271877 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-10-08 15:44:43.271885 | orchestrator | Wednesday 08 October 2025 15:44:03 +0000 (0:00:00.746) 0:01:28.004 ***** 2025-10-08 15:44:43.271893 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.271900 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.271908 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.271916 | orchestrator | 2025-10-08 15:44:43.271924 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-10-08 15:44:43.271932 | orchestrator | Wednesday 08 October 2025 15:44:04 +0000 (0:00:00.969) 0:01:28.973 ***** 2025-10-08 15:44:43.271940 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.271948 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.271956 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.271963 | orchestrator | 2025-10-08 15:44:43.271971 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-10-08 15:44:43.271979 | orchestrator | Wednesday 08 October 2025 15:44:05 +0000 (0:00:00.590) 0:01:29.564 ***** 2025-10-08 15:44:43.271987 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.271995 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272007 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272015 | orchestrator | 2025-10-08 15:44:43.272023 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-10-08 15:44:43.272031 | orchestrator | Wednesday 08 October 2025 15:44:06 +0000 (0:00:01.530) 0:01:31.094 ***** 2025-10-08 15:44:43.272039 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272047 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272055 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272062 | orchestrator | 2025-10-08 15:44:43.272070 | orchestrator | TASK [ovn-db : Unset 
bootstrap args fact] ************************************** 2025-10-08 15:44:43.272082 | orchestrator | Wednesday 08 October 2025 15:44:07 +0000 (0:00:00.826) 0:01:31.921 ***** 2025-10-08 15:44:43.272105 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272114 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272122 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272130 | orchestrator | 2025-10-08 15:44:43.272138 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-10-08 15:44:43.272146 | orchestrator | Wednesday 08 October 2025 15:44:08 +0000 (0:00:00.419) 0:01:32.340 ***** 2025-10-08 15:44:43.272154 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272162 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272171 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272179 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272193 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272202 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-10-08 15:44:43.272232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272241 | orchestrator | 2025-10-08 15:44:43.272249 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-10-08 15:44:43.272257 | orchestrator | Wednesday 08 October 2025 15:44:09 +0000 (0:00:01.458) 0:01:33.799 ***** 2025-10-08 15:44:43.272269 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272277 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272286 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272307 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272349 | orchestrator | 2025-10-08 15:44:43.272356 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-10-08 15:44:43.272364 | orchestrator | Wednesday 08 October 2025 15:44:13 +0000 (0:00:03.918) 0:01:37.717 ***** 2025-10-08 15:44:43.272378 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272390 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272399 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
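A pattern visible in the "Get OVN_Northbound cluster leader" / "Configure OVN NB connection settings" tasks above: all three nodes query the cluster, but only one node (testbed-node-0 in this run) reports `changed` for the connection-settings task while the others report `skipping`, consistent with applying `set-connection` on the RAFT leader only. A hedged sketch of that selection logic, assuming a `Role:` line in the status text; the status strings below are simplified stand-ins, not real `ovs-appctl cluster/status` output.

```python
# Sketch: decide which node applies connection settings, based on the
# "leader only" pattern in the log. The status text is a simplified
# stand-in for cluster status output; field names are assumptions.
def is_leader(cluster_status: str) -> bool:
    for line in cluster_status.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip() == "leader"
    return False

status_node0 = "Cluster ID: ab12\nRole: leader\nTerm: 4"
status_node1 = "Cluster ID: ab12\nRole: follower\nTerm: 4"

for host, status in [("testbed-node-0", status_node0),
                     ("testbed-node-1", status_node1)]:
    action = "changed" if is_leader(status) else "skipping"
    print(f"{action}: [{host}]")
```

Running the sketch reproduces the shape of the task output: `changed` for the leader, `skipping` for followers.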
2025-10-08 15:44:43.272407 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272437 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:44:43.272461 | orchestrator | 2025-10-08 15:44:43.272469 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.272477 | orchestrator | Wednesday 08 October 2025 15:44:16 +0000 (0:00:02.584) 0:01:40.301 ***** 2025-10-08 15:44:43.272485 | orchestrator | 2025-10-08 15:44:43.272493 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.272501 | orchestrator | Wednesday 08 October 2025 15:44:16 +0000 (0:00:00.074) 0:01:40.376 ***** 2025-10-08 15:44:43.272509 | orchestrator | 2025-10-08 15:44:43.272516 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-08 15:44:43.272524 | orchestrator | Wednesday 08 October 2025 15:44:16 +0000 (0:00:00.070) 0:01:40.446 ***** 2025-10-08 15:44:43.272532 | orchestrator | 2025-10-08 15:44:43.272540 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-10-08 15:44:43.272547 | orchestrator | Wednesday 08 October 2025 15:44:16 +0000 (0:00:00.069) 0:01:40.516 ***** 2025-10-08 15:44:43.272555 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.272563 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.272571 | orchestrator | 2025-10-08 15:44:43.272583 | 
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-10-08 15:44:43.272591 | orchestrator | Wednesday 08 October 2025 15:44:22 +0000 (0:00:06.219) 0:01:46.735 ***** 2025-10-08 15:44:43.272599 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.272607 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.272615 | orchestrator | 2025-10-08 15:44:43.272623 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-10-08 15:44:43.272631 | orchestrator | Wednesday 08 October 2025 15:44:29 +0000 (0:00:06.381) 0:01:53.117 ***** 2025-10-08 15:44:43.272644 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:44:43.272656 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:44:43.272664 | orchestrator | 2025-10-08 15:44:43.272672 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-10-08 15:44:43.272680 | orchestrator | Wednesday 08 October 2025 15:44:35 +0000 (0:00:06.953) 0:02:00.070 ***** 2025-10-08 15:44:43.272688 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:44:43.272696 | orchestrator | 2025-10-08 15:44:43.272704 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-10-08 15:44:43.272712 | orchestrator | Wednesday 08 October 2025 15:44:36 +0000 (0:00:00.168) 0:02:00.239 ***** 2025-10-08 15:44:43.272720 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272728 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272736 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272743 | orchestrator | 2025-10-08 15:44:43.272751 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-10-08 15:44:43.272759 | orchestrator | Wednesday 08 October 2025 15:44:36 +0000 (0:00:00.776) 0:02:01.015 ***** 2025-10-08 15:44:43.272767 | orchestrator | skipping: [testbed-node-1] 2025-10-08 
15:44:43.272775 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.272783 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.272791 | orchestrator | 2025-10-08 15:44:43.272799 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-10-08 15:44:43.272807 | orchestrator | Wednesday 08 October 2025 15:44:37 +0000 (0:00:00.628) 0:02:01.643 ***** 2025-10-08 15:44:43.272815 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272823 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272830 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272838 | orchestrator | 2025-10-08 15:44:43.272846 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-10-08 15:44:43.272854 | orchestrator | Wednesday 08 October 2025 15:44:38 +0000 (0:00:01.055) 0:02:02.698 ***** 2025-10-08 15:44:43.272862 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:44:43.272870 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:44:43.272878 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:44:43.272885 | orchestrator | 2025-10-08 15:44:43.272893 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-10-08 15:44:43.272901 | orchestrator | Wednesday 08 October 2025 15:44:39 +0000 (0:00:00.586) 0:02:03.285 ***** 2025-10-08 15:44:43.272909 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272917 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:44:43.272925 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:44:43.272933 | orchestrator | 2025-10-08 15:44:43.272941 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-10-08 15:44:43.272948 | orchestrator | Wednesday 08 October 2025 15:44:39 +0000 (0:00:00.783) 0:02:04.069 ***** 2025-10-08 15:44:43.272956 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:44:43.272964 | orchestrator | ok: 
[testbed-node-1]
2025-10-08 15:44:43.272972 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:44:43.272980 | orchestrator |
2025-10-08 15:44:43.272987 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:44:43.272995 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-10-08 15:44:43.273004 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-08 15:44:43.273012 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-08 15:44:43.273020 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:44:43.273028 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:44:43.273041 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:44:43.273049 | orchestrator |
2025-10-08 15:44:43.273057 | orchestrator |
2025-10-08 15:44:43.273065 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:44:43.273073 | orchestrator | Wednesday 08 October 2025 15:44:40 +0000 (0:00:01.003) 0:02:05.072 *****
2025-10-08 15:44:43.273081 | orchestrator | ===============================================================================
2025-10-08 15:44:43.273100 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.13s
2025-10-08 15:44:43.273109 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.53s
2025-10-08 15:44:43.273117 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.90s
2025-10-08 15:44:43.273124 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.82s
2025-10-08 15:44:43.273132 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.51s
2025-10-08 15:44:43.273140 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.09s
2025-10-08 15:44:43.273148 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s
2025-10-08 15:44:43.273160 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.71s
2025-10-08 15:44:43.273168 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.58s
2025-10-08 15:44:43.273176 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.37s
2025-10-08 15:44:43.273184 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.21s
2025-10-08 15:44:43.273191 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.09s
2025-10-08 15:44:43.273206 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.06s
2025-10-08 15:44:43.273214 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.74s
2025-10-08 15:44:43.273222 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.65s
2025-10-08 15:44:43.273230 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s
2025-10-08 15:44:43.273237 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.57s
2025-10-08 15:44:43.273245 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.53s
2025-10-08 15:44:43.273253 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s
2025-10-08 15:44:43.273261 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s
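The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages that follows is a client polling two OSISM task IDs until each reaches a terminal state. A minimal sketch of such a polling loop, assuming a `get_state(task_id)` callable (the function names here are hypothetical, not the actual osism client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task until every one reports a terminal state."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        pending -= results.keys()       # drop finished tasks from the poll set
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

In the log above, the two tasks stay in STARTED for roughly 3.5 minutes before the first reports SUCCESS, which matches this check-then-sleep pattern with a short fixed interval.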
2025-10-08 15:44:46.324834 | orchestrator | 2025-10-08 15:44:46 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED
2025-10-08 15:44:46.328473 | orchestrator | 2025-10-08 15:44:46 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:44:46.328896 | orchestrator | 2025-10-08 15:44:46 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:47:58.310209 | orchestrator | 2025-10-08 15:47:58 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state STARTED
2025-10-08 15:47:58.312585 | orchestrator | 2025-10-08 15:47:58 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:47:58.312684 | orchestrator | 2025-10-08 15:47:58 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:48:01.371819 | orchestrator | 2025-10-08 15:48:01 | INFO  | Task fccaece7-f16a-4b63-909c-fd8412342af3 is in state SUCCESS
2025-10-08 15:48:01.373956 | orchestrator |
2025-10-08 15:48:01.373999 | orchestrator |
2025-10-08 15:48:01.374011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:48:01.374479 | orchestrator |
2025-10-08 15:48:01.374497 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:48:01.374508 | orchestrator | Wednesday 08 October 2025 15:41:25 +0000 (0:00:00.282) 0:00:00.282 *****
2025-10-08 15:48:01.374519 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:48:01.374531 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:48:01.374543 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:48:01.374556 | orchestrator |
2025-10-08 15:48:01.374568 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:48:01.374579 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.344) 0:00:00.627 *****
2025-10-08 15:48:01.374591 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-10-08 15:48:01.374602 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-10-08 15:48:01.374614 | orchestrator |
ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-10-08 15:48:01.374626 | orchestrator | 2025-10-08 15:48:01.374637 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-10-08 15:48:01.374648 | orchestrator | 2025-10-08 15:48:01.374658 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-10-08 15:48:01.374668 | orchestrator | Wednesday 08 October 2025 15:41:26 +0000 (0:00:00.408) 0:00:01.035 ***** 2025-10-08 15:48:01.374679 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.374689 | orchestrator | 2025-10-08 15:48:01.374700 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-10-08 15:48:01.374710 | orchestrator | Wednesday 08 October 2025 15:41:27 +0000 (0:00:00.588) 0:00:01.623 ***** 2025-10-08 15:48:01.374720 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.374730 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.374740 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.374751 | orchestrator | 2025-10-08 15:48:01.374761 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-10-08 15:48:01.374771 | orchestrator | Wednesday 08 October 2025 15:41:27 +0000 (0:00:00.566) 0:00:02.190 ***** 2025-10-08 15:48:01.374781 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.374792 | orchestrator | 2025-10-08 15:48:01.374802 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-10-08 15:48:01.374813 | orchestrator | Wednesday 08 October 2025 15:41:28 +0000 (0:00:00.792) 0:00:02.982 ***** 2025-10-08 15:48:01.374848 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.374859 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.374869 | orchestrator | 
ok: [testbed-node-2] 2025-10-08 15:48:01.374879 | orchestrator | 2025-10-08 15:48:01.374889 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-10-08 15:48:01.374900 | orchestrator | Wednesday 08 October 2025 15:41:28 +0000 (0:00:00.585) 0:00:03.568 ***** 2025-10-08 15:48:01.374910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374921 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374954 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-10-08 15:48:01.374985 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-10-08 15:48:01.374996 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-10-08 15:48:01.375006 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-10-08 15:48:01.375016 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-10-08 15:48:01.375026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-10-08 15:48:01.375036 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-10-08 15:48:01.375046 | orchestrator | 2025-10-08 15:48:01.375056 | orchestrator | TASK [module-load : Load modules] 
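As an aside, the sysctl values the task above reports as `changed` would, written by hand, land in a drop-in under `/etc/sysctl.d/` roughly like this (a sketch of the resulting state, not the role's actual template or filename; note the `KOLLA_UNSET` item is deliberately not persisted):

```
# /etc/sysctl.d/kolla.conf (hypothetical filename)
# ip_nonlocal_bind lets keepalived bind the VIP on nodes
# that do not currently hold the address.
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
# net.ipv4.tcp_retries2 was requested as KOLLA_UNSET, so it is
# left at the kernel default and omitted here.
net.unix.max_dgram_qlen = 128
```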
********************************************** 2025-10-08 15:48:01.375077 | orchestrator | Wednesday 08 October 2025 15:41:31 +0000 (0:00:02.648) 0:00:06.217 ***** 2025-10-08 15:48:01.375087 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-10-08 15:48:01.375098 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-10-08 15:48:01.375108 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-10-08 15:48:01.375118 | orchestrator | 2025-10-08 15:48:01.375128 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-10-08 15:48:01.375237 | orchestrator | Wednesday 08 October 2025 15:41:32 +0000 (0:00:00.993) 0:00:07.211 ***** 2025-10-08 15:48:01.375249 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-10-08 15:48:01.375595 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-10-08 15:48:01.375606 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-10-08 15:48:01.375616 | orchestrator | 2025-10-08 15:48:01.375626 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-10-08 15:48:01.375636 | orchestrator | Wednesday 08 October 2025 15:41:34 +0000 (0:00:01.613) 0:00:08.824 ***** 2025-10-08 15:48:01.375645 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-10-08 15:48:01.375655 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.375681 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-10-08 15:48:01.375692 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.375703 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-10-08 15:48:01.375712 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.375722 | orchestrator | 2025-10-08 15:48:01.375732 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-10-08 15:48:01.375742 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:01.116) 
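The "Persist modules via modules-load.d" task above amounts to a one-line drop-in so that `ip_vs` (needed by keepalived/LVS) is loaded again on boot; a sketch with an assumed filename:

```
# /etc/modules-load.d/ip_vs.conf (hypothetical filename)
ip_vs
```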
0:00:09.940 ***** 2025-10-08 15:48:01.375755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.375783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.375794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 
'timeout': '30'}}}) 2025-10-08 15:48:01.375811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.375823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.375841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.375852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.375869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.375879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.375889 | orchestrator | 2025-10-08 15:48:01.375899 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-10-08 15:48:01.375909 | orchestrator | Wednesday 08 October 2025 15:41:38 +0000 (0:00:03.113) 0:00:13.054 ***** 2025-10-08 15:48:01.375919 | orchestrator | changed: 
[testbed-node-1] 2025-10-08 15:48:01.375929 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.375939 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.375948 | orchestrator | 2025-10-08 15:48:01.375958 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-10-08 15:48:01.375968 | orchestrator | Wednesday 08 October 2025 15:41:39 +0000 (0:00:01.542) 0:00:14.597 ***** 2025-10-08 15:48:01.375978 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-10-08 15:48:01.375987 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-10-08 15:48:01.375997 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-10-08 15:48:01.376007 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-10-08 15:48:01.376017 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-10-08 15:48:01.376026 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-10-08 15:48:01.376036 | orchestrator | 2025-10-08 15:48:01.376050 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-10-08 15:48:01.376060 | orchestrator | Wednesday 08 October 2025 15:41:42 +0000 (0:00:02.156) 0:00:16.753 ***** 2025-10-08 15:48:01.376070 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.376080 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.376089 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.376099 | orchestrator | 2025-10-08 15:48:01.376109 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-10-08 15:48:01.376163 | orchestrator | Wednesday 08 October 2025 15:41:43 +0000 (0:00:01.494) 0:00:18.248 ***** 2025-10-08 15:48:01.376175 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.376357 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.376374 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.376385 | orchestrator | 2025-10-08 
15:48:01.376527 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-10-08 15:48:01.376540 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:01.517) 0:00:19.765 ***** 2025-10-08 15:48:01.376552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.376750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.376767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.376779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.376790 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.376801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.376818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.376830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.376841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.376859 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.376880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.376891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.376902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.376913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.376924 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.376934 | orchestrator | 2025-10-08 15:48:01.376949 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-10-08 15:48:01.376959 | orchestrator | Wednesday 08 October 2025 15:41:46 +0000 (0:00:01.671) 0:00:21.437 ***** 2025-10-08 15:48:01.376970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.376986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-08 
15:48:01.377006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.377039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.377054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.377090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.377107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.377185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca', '__omit_place_holder__18652d177f71a54b70fa24e2d2c805ba8d31bbca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-08 15:48:01.377198 | orchestrator | 2025-10-08 15:48:01.377209 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-10-08 15:48:01.377219 | orchestrator | Wednesday 08 October 2025 15:41:51 +0000 (0:00:04.307) 0:00:25.744 ***** 2025-10-08 15:48:01.377234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.377332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.377348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.377365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.377861 | orchestrator | 2025-10-08 15:48:01.377874 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-10-08 15:48:01.377884 | orchestrator | Wednesday 08 October 2025 15:41:54 +0000 (0:00:03.188) 0:00:28.933 ***** 2025-10-08 15:48:01.377894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-08 15:48:01.377904 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-08 15:48:01.377914 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-08 15:48:01.377924 | orchestrator | 2025-10-08 15:48:01.377934 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-10-08 15:48:01.377944 | orchestrator | Wednesday 08 October 2025 15:41:57 +0000 (0:00:03.460) 0:00:32.393 ***** 2025-10-08 15:48:01.377954 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-08 15:48:01.377963 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-08 15:48:01.377973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-08 15:48:01.377983 | orchestrator | 2025-10-08 15:48:01.381972 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 
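Each service item copied above carries a healthcheck dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal sketch of how such a dict maps onto container healthcheck settings, the helper below translates one into `docker run`-style flags. The function name and flag mapping are illustrative assumptions, not the actual kolla-ansible implementation:

```python
# Sketch: turn a Kolla-style service healthcheck dict (as logged above)
# into docker CLI healthcheck flags. Values in the log are strings of
# seconds; CMD-SHELL tests become a --health-cmd shell command.

def healthcheck_to_docker_args(hc: dict) -> list:
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    test = hc["test"]
    if test and test[0] == "CMD-SHELL":
        # Remaining list entries form the shell command to run inside
        # the container.
        args += ["--health-cmd", " ".join(test[1:])]
    return args

# The proxysql healthcheck exactly as it appears in the log items above.
proxysql_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(proxysql_hc))
```

Note that keepalived items in the log carry no `healthcheck` key at all, so any real consumer of these dicts has to treat the healthcheck as optional.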
2025-10-08 15:48:01.382069 | orchestrator | Wednesday 08 October 2025 15:42:02 +0000 (0:00:05.149) 0:00:37.543 ***** 2025-10-08 15:48:01.382082 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.382092 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.382101 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.382109 | orchestrator | 2025-10-08 15:48:01.382118 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-10-08 15:48:01.382127 | orchestrator | Wednesday 08 October 2025 15:42:03 +0000 (0:00:00.892) 0:00:38.436 ***** 2025-10-08 15:48:01.382162 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-08 15:48:01.382173 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-08 15:48:01.382181 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-08 15:48:01.382189 | orchestrator | 2025-10-08 15:48:01.382198 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-10-08 15:48:01.382206 | orchestrator | Wednesday 08 October 2025 15:42:05 +0000 (0:00:01.997) 0:00:40.434 ***** 2025-10-08 15:48:01.382214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-08 15:48:01.382223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-08 15:48:01.382231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-08 15:48:01.382240 | orchestrator | 2025-10-08 15:48:01.382262 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2025-10-08 15:48:01.382270 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:03.923) 0:00:44.358 ***** 2025-10-08 15:48:01.382279 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-10-08 15:48:01.382287 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-10-08 15:48:01.382295 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-10-08 15:48:01.382303 | orchestrator | 2025-10-08 15:48:01.382311 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-10-08 15:48:01.382320 | orchestrator | Wednesday 08 October 2025 15:42:12 +0000 (0:00:02.330) 0:00:46.688 ***** 2025-10-08 15:48:01.382328 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-10-08 15:48:01.382336 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-10-08 15:48:01.382354 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-10-08 15:48:01.382362 | orchestrator | 2025-10-08 15:48:01.382371 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-10-08 15:48:01.382379 | orchestrator | Wednesday 08 October 2025 15:42:14 +0000 (0:00:02.171) 0:00:48.859 ***** 2025-10-08 15:48:01.382387 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.382395 | orchestrator | 2025-10-08 15:48:01.382409 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-10-08 15:48:01.382418 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:00.827) 0:00:49.687 ***** 2025-10-08 15:48:01.382707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-08 15:48:01.382807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.382817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.382826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-08 15:48:01.382835 | orchestrator | 2025-10-08 15:48:01.382844 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-10-08 15:48:01.382853 | orchestrator | Wednesday 08 October 2025 15:42:18 +0000 (0:00:03.484) 0:00:53.171 ***** 2025-10-08 15:48:01.382869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.382885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.382895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.382904 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.382912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.382925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.382933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.382942 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.382950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.382963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.382978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.382986 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.382994 | orchestrator | 2025-10-08 15:48:01.383002 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-10-08 15:48:01.383010 | orchestrator | Wednesday 08 October 2025 15:42:19 +0000 (0:00:00.921) 0:00:54.093 ***** 2025-10-08 15:48:01.383019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383048 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.383056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383092 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.383101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383181 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.383192 | orchestrator | 2025-10-08 15:48:01.383200 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-10-08 15:48:01.383208 | orchestrator | Wednesday 08 October 2025 15:42:20 +0000 (0:00:01.170) 0:00:55.263 
***** 2025-10-08 15:48:01.383216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383313 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.383610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383622 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.383629 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-08 15:48:01.383637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-08 15:48:01.383656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-08 15:48:01.383663 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.383670 | orchestrator | 2025-10-08 15:48:01.383724 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS certificate] ***
2025-10-08 15:48:01.383738 | orchestrator | Wednesday 08 October 2025 15:42:23 +0000 (0:00:03.121) 0:00:58.385 *****
2025-10-08 15:48:01.383745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383767 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.383777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383804 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.383816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383838 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.383845 | orchestrator |
2025-10-08 15:48:01.383852 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-10-08 15:48:01.383858 | orchestrator | Wednesday 08 October 2025 15:42:25 +0000 (0:00:01.539) 0:00:59.924 *****
2025-10-08 15:48:01.383869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383896 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.383908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383930 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.383937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.383947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.383961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.383968 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.383975 | orchestrator |
2025-10-08 15:48:01.383991 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-10-08 15:48:01.383998 | orchestrator | Wednesday 08 October 2025 15:42:26 +0000 (0:00:01.051) 0:01:00.975 *****
2025-10-08 15:48:01.384005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384031 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.384038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384268 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.384275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384302 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.384309 | orchestrator |
2025-10-08 15:48:01.384316 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-10-08 15:48:01.384323 | orchestrator | Wednesday 08 October 2025 15:42:27 +0000 (0:00:01.182) 0:01:02.158 *****
2025-10-08 15:48:01.384330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384361 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.384367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384395 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.384402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384428 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.384435 | orchestrator |
2025-10-08 15:48:01.384442 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-10-08 15:48:01.384449 | orchestrator | Wednesday 08 October 2025 15:42:28 +0000 (0:00:00.740) 0:01:02.898 *****
2025-10-08 15:48:01.384459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384482 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.384493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384519 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.384532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384554 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.384560 | orchestrator |
2025-10-08 15:48:01.384567 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-10-08 15:48:01.384574 | orchestrator | Wednesday 08 October 2025 15:42:29 +0000 (0:00:01.113) 0:01:04.012 *****
2025-10-08 15:48:01.384581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-08 15:48:01.384588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-08 15:48:01.384599 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-08 15:48:01.384606 | orchestrator |
2025-10-08 15:48:01.384613 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-10-08 15:48:01.384620 | orchestrator | Wednesday 08 October 2025 15:42:31 +0000 (0:00:01.759) 0:01:05.771 *****
2025-10-08 15:48:01.384626 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-08 15:48:01.384633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-08 15:48:01.384640 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-08 15:48:01.384647 | orchestrator |
2025-10-08 15:48:01.384653 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-10-08 15:48:01.384660 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:01.375) 0:01:07.614 *****
2025-10-08 15:48:01.384667 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-08 15:48:01.384679 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-08 15:48:01.384686 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-08 15:48:01.384693 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-08 15:48:01.384699 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.384706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-08 15:48:01.384713 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.384720 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-08 15:48:01.384727 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.384733 | orchestrator |
2025-10-08 15:48:01.384740 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-10-08 15:48:01.384747 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:01.375) 0:01:08.989 *****
2025-10-08 15:48:01.384754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-08 15:48:01.384784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-08 15:48:01.384811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-08 15:48:01.384835 | orchestrator |
2025-10-08 15:48:01.384850 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-10-08 15:48:01.384857 | orchestrator | Wednesday 08 October 2025 15:42:37 +0000 (0:00:03.071) 0:01:12.061 *****
2025-10-08 15:48:01.384864 | orchestrator | included: aodh for testbed-node-1, testbed-node-0, testbed-node-2
2025-10-08 15:48:01.384871 | orchestrator |
2025-10-08 15:48:01.384878 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-10-08 15:48:01.384885 | orchestrator | Wednesday 08 October 2025 15:42:38 +0000 (0:00:00.711) 0:01:12.773 *****
2025-10-08 15:48:01.384893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-08 15:48:01.384910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-08 15:48:01.384918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.384925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-10-08 15:48:01.384935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.384943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.385257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-10-08 15:48:01.388183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.388190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388207 | orchestrator | 2025-10-08 15:48:01.388214 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-10-08 15:48:01.388221 | orchestrator | Wednesday 08 October 2025 15:42:43 +0000 (0:00:05.426) 0:01:18.200 ***** 2025-10-08 15:48:01.388228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-08 15:48:01.388246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2025-10-08 15:48:01.388254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388267 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.388274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-08 15:48:01.388283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.388290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 
'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-08 15:48:01.388310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.388317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388323 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.388330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388346 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.388353 | orchestrator | 2025-10-08 15:48:01.388359 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-10-08 15:48:01.388365 | orchestrator | Wednesday 08 October 2025 15:42:44 +0000 (0:00:00.990) 0:01:19.190 ***** 2025-10-08 15:48:01.388372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388393 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.388399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388412 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.388419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-08 15:48:01.388440 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.388450 | orchestrator | 2025-10-08 15:48:01.388464 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-10-08 15:48:01.388475 | orchestrator | Wednesday 08 October 2025 15:42:45 +0000 (0:00:01.110) 0:01:20.300 ***** 2025-10-08 15:48:01.388484 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.388491 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.388497 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.388503 | orchestrator | 2025-10-08 15:48:01.388510 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-10-08 15:48:01.388516 | orchestrator | Wednesday 08 October 2025 15:42:47 +0000 (0:00:01.431) 0:01:21.732 ***** 2025-10-08 15:48:01.388522 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.388529 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.388535 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.388541 | orchestrator | 2025-10-08 15:48:01.388547 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-10-08 15:48:01.388554 | orchestrator | Wednesday 08 October 2025 
15:42:49 +0000 (0:00:02.045) 0:01:23.778 ***** 2025-10-08 15:48:01.388560 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.388566 | orchestrator | 2025-10-08 15:48:01.388572 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-10-08 15:48:01.388578 | orchestrator | Wednesday 08 October 2025 15:42:50 +0000 (0:00:00.948) 0:01:24.726 ***** 2025-10-08 15:48:01.388586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.388597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.388626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.388652 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.388666 | orchestrator | 2025-10-08 15:48:01.388672 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-10-08 15:48:01.388679 | orchestrator | Wednesday 08 October 2025 15:42:55 +0000 (0:00:05.000) 0:01:29.726 ***** 2025-10-08 15:48:01.388689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.388696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388709 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.388720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.388731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388746 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.388758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.388766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.388785 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.388793 | orchestrator |
2025-10-08 15:48:01.388800 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-10-08 15:48:01.388806 | orchestrator | Wednesday 08 October 2025 15:42:55 +0000 (0:00:00.629) 0:01:30.356 *****
2025-10-08 15:48:01.388813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388831 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.388838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388851 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.388857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-10-08 15:48:01.388870 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.388877 | orchestrator |
2025-10-08 15:48:01.388883 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-10-08 15:48:01.388889 | orchestrator | Wednesday 08 October 2025 15:42:56 +0000 (0:00:01.196) 0:01:31.552 *****
2025-10-08 15:48:01.388895 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.388902 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.388908 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.388914 | orchestrator |
2025-10-08 15:48:01.388920 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-10-08 15:48:01.388927 | orchestrator | Wednesday 08 October 2025 15:42:58 +0000 (0:00:01.424) 0:01:32.976 *****
2025-10-08 15:48:01.388933 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.388939 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.388946 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.388952 | orchestrator |
2025-10-08 15:48:01.388961 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-10-08 15:48:01.388968 | orchestrator | Wednesday 08 October 2025 15:43:00 +0000 (0:00:02.095) 0:01:35.071 *****
2025-10-08 15:48:01.388974 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.388980 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.388987 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.388993 | orchestrator |
2025-10-08 15:48:01.388999 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-10-08 15:48:01.389005 | orchestrator | Wednesday 08 October 2025 15:43:00 +0000 (0:00:00.354) 0:01:35.426 *****
2025-10-08 15:48:01.389012 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.389018 | orchestrator |
2025-10-08 15:48:01.389024 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-10-08 15:48:01.389034 | orchestrator | Wednesday 08 October 2025 15:43:01 +0000 (0:00:00.916) 0:01:36.343 *****
2025-10-08 15:48:01.389041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389064 | orchestrator |
2025-10-08 15:48:01.389071 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-10-08 15:48:01.389077 | orchestrator | Wednesday 08 October 2025 15:43:04 +0000 (0:00:02.624) 0:01:38.968 *****
2025-10-08 15:48:01.389087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389094 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389110 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.389117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-10-08 15:48:01.389124 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.389144 | orchestrator |
2025-10-08 15:48:01.389150 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-10-08 15:48:01.389157 | orchestrator | Wednesday 08 October 2025 15:43:05 +0000 (0:00:01.599) 0:01:40.568 *****
2025-10-08 15:48:01.389164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389180 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389200 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.389209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-10-08 15:48:01.389226 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.389233 | orchestrator |
2025-10-08 15:48:01.389239 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-10-08 15:48:01.389245 | orchestrator | Wednesday 08 October 2025 15:43:07 +0000 (0:00:01.739) 0:01:42.307 *****
2025-10-08 15:48:01.389252 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389258 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.389264 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.389270 | orchestrator |
2025-10-08 15:48:01.389276 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-10-08 15:48:01.389312 | orchestrator | Wednesday 08 October 2025 15:43:08 +0000 (0:00:00.680) 0:01:42.988 *****
2025-10-08 15:48:01.389323 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389329 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.389335 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.389342 | orchestrator |
2025-10-08 15:48:01.389348 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-10-08 15:48:01.389354 | orchestrator | Wednesday 08 October 2025 15:43:09 +0000 (0:00:01.379) 0:01:44.367 *****
2025-10-08 15:48:01.389360 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.389367 | orchestrator |
2025-10-08 15:48:01.389373 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-10-08 15:48:01.389379 | orchestrator | Wednesday 08 October 2025 15:43:10 +0000 (0:00:00.754) 0:01:45.121 *****
2025-10-08 15:48:01.389388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389491 | orchestrator |
2025-10-08 15:48:01.389497 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-10-08 15:48:01.389504 | orchestrator | Wednesday 08 October 2025 15:43:14 +0000 (0:00:04.179) 0:01:49.301 *****
2025-10-08 15:48:01.389510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389544 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.389553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389587 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.389600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.389626 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.389632 | orchestrator |
2025-10-08 15:48:01.389638 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-10-08 15:48:01.389645 | orchestrator | Wednesday 08 October 2025 15:43:15 +0000 (0:00:01.137) 0:01:50.438 *****
2025-10-08 15:48:01.389651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-10-08 15:48:01.389661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-10-08 15:48:01.389668 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.389675 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-08 15:48:01.389681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-08 15:48:01.389687 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.389694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-08 15:48:01.389700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-08 15:48:01.389706 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.389713 | orchestrator | 2025-10-08 15:48:01.389719 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-10-08 15:48:01.389725 | orchestrator | Wednesday 08 October 2025 15:43:16 +0000 (0:00:00.964) 0:01:51.403 ***** 2025-10-08 15:48:01.389731 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.389737 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.389743 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.389750 | orchestrator | 2025-10-08 15:48:01.389756 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-10-08 15:48:01.389762 | orchestrator | Wednesday 08 October 2025 15:43:18 +0000 (0:00:01.361) 0:01:52.764 ***** 2025-10-08 15:48:01.389768 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.389774 | 
orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.389781 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.389787 | orchestrator | 2025-10-08 15:48:01.389793 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-10-08 15:48:01.389804 | orchestrator | Wednesday 08 October 2025 15:43:20 +0000 (0:00:02.080) 0:01:54.844 ***** 2025-10-08 15:48:01.389811 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.389817 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.389823 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.389829 | orchestrator | 2025-10-08 15:48:01.389835 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-10-08 15:48:01.389842 | orchestrator | Wednesday 08 October 2025 15:43:20 +0000 (0:00:00.575) 0:01:55.420 ***** 2025-10-08 15:48:01.389848 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.389854 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.389860 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.389866 | orchestrator | 2025-10-08 15:48:01.389872 | orchestrator | TASK [include_role : designate] ************************************************ 2025-10-08 15:48:01.389881 | orchestrator | Wednesday 08 October 2025 15:43:21 +0000 (0:00:00.359) 0:01:55.779 ***** 2025-10-08 15:48:01.389887 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.389894 | orchestrator | 2025-10-08 15:48:01.389900 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-10-08 15:48:01.389906 | orchestrator | Wednesday 08 October 2025 15:43:22 +0000 (0:00:00.838) 0:01:56.617 ***** 2025-10-08 15:48:01.389913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 15:48:01.389923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.389930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-10-08 15:48:01.389937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.389948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.389959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.389966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.389972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 15:48:01.391683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.391708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 
15:48:01.391737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 15:48:01.391761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.391770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391801 | orchestrator | 2025-10-08 
15:48:01.391807 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-10-08 15:48:01.391813 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:05.107) 0:02:01.725 ***** 2025-10-08 15:48:01.391822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 15:48:01.391831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.391836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391871 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.391877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 15:48:01.391888 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 15:48:01.391896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.391902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 15:48:01.391908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-10-08 15:48:01.391952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.391983 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.391988 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.391994 | orchestrator | 2025-10-08 15:48:01.391999 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-10-08 15:48:01.392005 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:00.850) 0:02:02.576 ***** 2025-10-08 15:48:01.392010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392023 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392040 | orchestrator | skipping: 
[testbed-node-0] 2025-10-08 15:48:01.392045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-08 15:48:01.392058 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392064 | orchestrator | 2025-10-08 15:48:01.392069 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-10-08 15:48:01.392075 | orchestrator | Wednesday 08 October 2025 15:43:28 +0000 (0:00:00.992) 0:02:03.568 ***** 2025-10-08 15:48:01.392080 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.392086 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392091 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392097 | orchestrator | 2025-10-08 15:48:01.392102 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-10-08 15:48:01.392108 | orchestrator | Wednesday 08 October 2025 15:43:30 +0000 (0:00:01.648) 0:02:05.217 ***** 2025-10-08 15:48:01.392113 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.392118 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392124 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392176 | orchestrator | 2025-10-08 15:48:01.392183 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-10-08 15:48:01.392192 | orchestrator | Wednesday 08 October 2025 15:43:32 +0000 (0:00:01.827) 0:02:07.044 ***** 2025-10-08 15:48:01.392198 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392203 | orchestrator | skipping: [testbed-node-1] 2025-10-08 
15:48:01.392209 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392214 | orchestrator | 2025-10-08 15:48:01.392220 | orchestrator | TASK [include_role : glance] *************************************************** 2025-10-08 15:48:01.392225 | orchestrator | Wednesday 08 October 2025 15:43:32 +0000 (0:00:00.545) 0:02:07.589 ***** 2025-10-08 15:48:01.392231 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.392236 | orchestrator | 2025-10-08 15:48:01.392241 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-10-08 15:48:01.392247 | orchestrator | Wednesday 08 October 2025 15:43:33 +0000 (0:00:00.856) 0:02:08.446 ***** 2025-10-08 15:48:01.392259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:48:01.392269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:48:01.392296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:48:01.392320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392326 | orchestrator | 2025-10-08 15:48:01.392333 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-10-08 15:48:01.392339 | orchestrator | Wednesday 08 
October 2025 15:43:38 +0000 (0:00:04.546) 0:02:12.992 ***** 2025-10-08 15:48:01.392348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:48:01.392357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392364 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:48:01.392384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392391 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:48:01.392412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.392418 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392424 | orchestrator | 2025-10-08 15:48:01.392429 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-10-08 15:48:01.392435 | orchestrator | Wednesday 08 October 2025 15:43:42 +0000 (0:00:03.684) 0:02:16.677 ***** 2025-10-08 15:48:01.392441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392455 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392475 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392480 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-08 15:48:01.392495 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392501 | orchestrator | 2025-10-08 15:48:01.392506 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-10-08 15:48:01.392511 | orchestrator | Wednesday 08 October 2025 15:43:45 +0000 (0:00:03.814) 0:02:20.491 ***** 2025-10-08 15:48:01.392516 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.392520 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392525 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392530 | orchestrator | 2025-10-08 15:48:01.392535 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-10-08 15:48:01.392540 | orchestrator | Wednesday 08 October 2025 15:43:47 +0000 (0:00:01.319) 0:02:21.811 ***** 2025-10-08 15:48:01.392545 | orchestrator | changed: [testbed-node-0] 
2025-10-08 15:48:01.392550 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392554 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392559 | orchestrator | 2025-10-08 15:48:01.392564 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-10-08 15:48:01.392569 | orchestrator | Wednesday 08 October 2025 15:43:49 +0000 (0:00:02.137) 0:02:23.948 ***** 2025-10-08 15:48:01.392574 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392579 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392583 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392588 | orchestrator | 2025-10-08 15:48:01.392593 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-10-08 15:48:01.392598 | orchestrator | Wednesday 08 October 2025 15:43:49 +0000 (0:00:00.584) 0:02:24.532 ***** 2025-10-08 15:48:01.392606 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.392611 | orchestrator | 2025-10-08 15:48:01.392615 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-10-08 15:48:01.392620 | orchestrator | Wednesday 08 October 2025 15:43:50 +0000 (0:00:00.901) 0:02:25.434 ***** 2025-10-08 15:48:01.392625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-10-08 15:48:01.392633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-08 15:48:01.392638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-08 15:48:01.392643 | orchestrator | 2025-10-08 15:48:01.392648 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-10-08 15:48:01.392653 | orchestrator | Wednesday 08 October 2025 15:43:54 +0000 (0:00:03.660) 0:02:29.095 ***** 2025-10-08 15:48:01.392661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-08 15:48:01.392667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-08 15:48:01.392674 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392679 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2025-10-08 15:48:01.392690 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392694 | orchestrator | 2025-10-08 15:48:01.392699 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-10-08 15:48:01.392704 | orchestrator | Wednesday 08 October 2025 15:43:55 +0000 (0:00:00.764) 0:02:29.859 ***** 2025-10-08 15:48:01.392709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392726 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392736 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-08 15:48:01.392750 | orchestrator | skipping: [testbed-node-2] 
2025-10-08 15:48:01.392755 | orchestrator | 2025-10-08 15:48:01.392760 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-10-08 15:48:01.392765 | orchestrator | Wednesday 08 October 2025 15:43:55 +0000 (0:00:00.721) 0:02:30.581 ***** 2025-10-08 15:48:01.392770 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.392774 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392779 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392784 | orchestrator | 2025-10-08 15:48:01.392789 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-10-08 15:48:01.392794 | orchestrator | Wednesday 08 October 2025 15:43:57 +0000 (0:00:01.420) 0:02:32.001 ***** 2025-10-08 15:48:01.392798 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.392803 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.392808 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.392813 | orchestrator | 2025-10-08 15:48:01.392818 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-10-08 15:48:01.392823 | orchestrator | Wednesday 08 October 2025 15:43:59 +0000 (0:00:02.093) 0:02:34.095 ***** 2025-10-08 15:48:01.392827 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.392832 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.392839 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.392848 | orchestrator | 2025-10-08 15:48:01.392853 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-10-08 15:48:01.392858 | orchestrator | Wednesday 08 October 2025 15:44:00 +0000 (0:00:00.551) 0:02:34.646 ***** 2025-10-08 15:48:01.392863 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.392868 | orchestrator | 2025-10-08 15:48:01.392873 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2025-10-08 15:48:01.392878 | orchestrator | Wednesday 08 October 2025 15:44:01 +0000 (0:00:01.075) 0:02:35.721 ***** 2025-10-08 15:48:01.392883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:48:01.392917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:48:01.392932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:48:01.392938 | orchestrator | 2025-10-08 15:48:01.392943 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-10-08 15:48:01.392948 | orchestrator | Wednesday 08 October 2025 15:44:05 +0000 (0:00:04.606) 0:02:40.328 ***** 2025-10-08 15:48:01.393039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:48:01 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:48:01.393052 | orchestrator | 2025-10-08 15:48:01 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED 2025-10-08 15:48:01.393057 | orchestrator | 2025-10-08 15:48:01 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:48:01.393062 | orchestrator | 2025-10-08 15:48:01 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:48:01.393067 | orchestrator | 2025-10-08 15:48:01.393073 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:48:01.393089 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:48:01.393105 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393110 | orchestrator | 2025-10-08 15:48:01.393115 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-10-08 15:48:01.393120 | orchestrator | Wednesday 08 October 2025 15:44:07 +0000 (0:00:01.607) 0:02:41.935 ***** 2025-10-08 15:48:01.393125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-08 15:48:01.393204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-08 15:48:01.393209 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393214 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-08 15:48:01.393224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-08 15:48:01.393231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-08 15:48:01.393236 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393241 | orchestrator | 2025-10-08 15:48:01.393246 | 
orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-10-08 15:48:01.393251 | orchestrator | Wednesday 08 October 2025 15:44:08 +0000 (0:00:01.064) 0:02:43.000 ***** 2025-10-08 15:48:01.393255 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.393263 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.393268 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.393273 | orchestrator | 2025-10-08 15:48:01.393277 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-10-08 15:48:01.393282 | orchestrator | Wednesday 08 October 2025 15:44:09 +0000 (0:00:01.308) 0:02:44.308 ***** 2025-10-08 15:48:01.393287 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.393292 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.393297 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.393302 | orchestrator | 2025-10-08 15:48:01.393306 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-10-08 15:48:01.393311 | orchestrator | Wednesday 08 October 2025 15:44:11 +0000 (0:00:02.290) 0:02:46.599 ***** 2025-10-08 15:48:01.393316 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393321 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393325 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393330 | orchestrator | 2025-10-08 15:48:01.393335 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-10-08 15:48:01.393340 | orchestrator | Wednesday 08 October 2025 15:44:12 +0000 (0:00:00.307) 0:02:46.906 ***** 2025-10-08 15:48:01.393345 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393349 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393354 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393359 | orchestrator | 2025-10-08 15:48:01.393364 | 
orchestrator | TASK [include_role : keystone] ************************************************* 2025-10-08 15:48:01.393369 | orchestrator | Wednesday 08 October 2025 15:44:12 +0000 (0:00:00.592) 0:02:47.499 ***** 2025-10-08 15:48:01.393373 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.393378 | orchestrator | 2025-10-08 15:48:01.393383 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-10-08 15:48:01.393388 | orchestrator | Wednesday 08 October 2025 15:44:13 +0000 (0:00:00.960) 0:02:48.460 ***** 2025-10-08 15:48:01.393396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:48:01.393402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-10-08 15:48:01.393424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:48:01.393443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393458 | orchestrator | 2025-10-08 15:48:01.393463 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-10-08 15:48:01.393468 | orchestrator | Wednesday 08 October 2025 15:44:17 +0000 (0:00:03.953) 0:02:52.413 ***** 2025-10-08 15:48:01.393473 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:48:01.393481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393529 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:48:01.393565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393575 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:48:01.393589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:48:01.393594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:48:01.393603 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393608 | orchestrator | 2025-10-08 15:48:01.393613 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-10-08 15:48:01.393618 | orchestrator | Wednesday 08 October 2025 15:44:18 +0000 (0:00:00.628) 0:02:53.042 ***** 2025-10-08 15:48:01.393623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393634 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393651 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-08 15:48:01.393668 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393674 | orchestrator | 2025-10-08 15:48:01.393679 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-10-08 15:48:01.393685 | orchestrator | Wednesday 08 October 2025 15:44:19 +0000 (0:00:00.830) 0:02:53.872 ***** 2025-10-08 15:48:01.393691 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.393696 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.393702 | orchestrator | 
changed: [testbed-node-2] 2025-10-08 15:48:01.393707 | orchestrator | 2025-10-08 15:48:01.393713 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-10-08 15:48:01.393719 | orchestrator | Wednesday 08 October 2025 15:44:20 +0000 (0:00:01.390) 0:02:55.262 ***** 2025-10-08 15:48:01.393724 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.393730 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.393736 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.393741 | orchestrator | 2025-10-08 15:48:01.393747 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-10-08 15:48:01.393752 | orchestrator | Wednesday 08 October 2025 15:44:23 +0000 (0:00:02.401) 0:02:57.663 ***** 2025-10-08 15:48:01.393758 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393763 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393769 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393774 | orchestrator | 2025-10-08 15:48:01.393782 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-10-08 15:48:01.393788 | orchestrator | Wednesday 08 October 2025 15:44:23 +0000 (0:00:00.617) 0:02:58.281 ***** 2025-10-08 15:48:01.393793 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.393799 | orchestrator | 2025-10-08 15:48:01.393804 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-10-08 15:48:01.393813 | orchestrator | Wednesday 08 October 2025 15:44:24 +0000 (0:00:01.023) 0:02:59.305 ***** 2025-10-08 15:48:01.393819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 15:48:01.393825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 15:48:01.393840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 15:48:01.393861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393867 | orchestrator | 2025-10-08 15:48:01.393873 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-10-08 15:48:01.393878 | orchestrator | Wednesday 08 October 2025 15:44:28 +0000 (0:00:03.730) 0:03:03.036 ***** 2025-10-08 15:48:01.393884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 15:48:01.393892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393898 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 15:48:01.393916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393922 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 15:48:01.393934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.393939 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.393945 | orchestrator | 2025-10-08 15:48:01.393953 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-10-08 15:48:01.393958 | orchestrator | Wednesday 08 October 2025 15:44:29 +0000 (0:00:01.093) 0:03:04.129 ***** 2025-10-08 15:48:01.393964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-10-08 15:48:01.393970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-10-08 15:48:01.393976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-08 15:48:01.393982 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.393988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-08 15:48:01.393993 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.393999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}})  2025-10-08 15:48:01.394008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-08 15:48:01.394042 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394048 | orchestrator | 2025-10-08 15:48:01.394053 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-10-08 15:48:01.394059 | orchestrator | Wednesday 08 October 2025 15:44:30 +0000 (0:00:00.993) 0:03:05.123 ***** 2025-10-08 15:48:01.394064 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.394069 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.394074 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.394079 | orchestrator | 2025-10-08 15:48:01.394087 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-10-08 15:48:01.394092 | orchestrator | Wednesday 08 October 2025 15:44:31 +0000 (0:00:01.398) 0:03:06.522 ***** 2025-10-08 15:48:01.394097 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.394102 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.394106 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.394111 | orchestrator | 2025-10-08 15:48:01.394116 | orchestrator | TASK [include_role : manila] *************************************************** 2025-10-08 15:48:01.394121 | orchestrator | Wednesday 08 October 2025 15:44:33 +0000 (0:00:02.063) 0:03:08.585 ***** 2025-10-08 15:48:01.394126 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.394170 | orchestrator | 2025-10-08 15:48:01.394176 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-10-08 15:48:01.394181 | orchestrator | Wednesday 08 October 2025 15:44:35 +0000 
(0:00:01.381) 0:03:09.966 ***** 2025-10-08 15:48:01.394186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-10-08 15:48:01.394191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-10-08 15:48:01.394222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-10-08 15:48:01.394248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394267 | orchestrator | 2025-10-08 15:48:01.394272 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-10-08 15:48:01.394277 | orchestrator | Wednesday 08 October 2025 15:44:38 +0000 (0:00:03.578) 0:03:13.545 ***** 2025-10-08 15:48:01.394283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-08 15:48:01.394288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394309 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-08 15:48:01.394323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394333 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394338 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-08 15:48:01.394354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.394372 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394377 | orchestrator | 2025-10-08 15:48:01.394382 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-10-08 15:48:01.394387 | orchestrator | Wednesday 08 October 2025 15:44:39 +0000 (0:00:00.771) 0:03:14.317 ***** 2025-10-08 15:48:01.394392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394408 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394417 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-08 15:48:01.394435 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394440 | orchestrator | 2025-10-08 15:48:01.394445 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-10-08 15:48:01.394450 | orchestrator | Wednesday 08 October 2025 15:44:41 +0000 (0:00:01.352) 0:03:15.669 ***** 2025-10-08 15:48:01.394455 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.394459 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.394466 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.394471 | orchestrator | 2025-10-08 
15:48:01.394476 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-10-08 15:48:01.394481 | orchestrator | Wednesday 08 October 2025 15:44:42 +0000 (0:00:01.305) 0:03:16.974 ***** 2025-10-08 15:48:01.394486 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.394491 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.394495 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.394500 | orchestrator | 2025-10-08 15:48:01.394505 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-10-08 15:48:01.394510 | orchestrator | Wednesday 08 October 2025 15:44:44 +0000 (0:00:02.008) 0:03:18.983 ***** 2025-10-08 15:48:01.394515 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.394520 | orchestrator | 2025-10-08 15:48:01.394525 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-10-08 15:48:01.394529 | orchestrator | Wednesday 08 October 2025 15:44:45 +0000 (0:00:01.374) 0:03:20.357 ***** 2025-10-08 15:48:01.394535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-08 15:48:01.394539 | orchestrator | 2025-10-08 15:48:01.394544 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-10-08 15:48:01.394549 | orchestrator | Wednesday 08 October 2025 15:44:48 +0000 (0:00:02.610) 0:03:22.968 ***** 2025-10-08 15:48:01.394566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-10-08 15:48:01.394581 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394594 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-08 15:48:01.394602 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-08 15:48:01.394623 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394628 | orchestrator | 2025-10-08 15:48:01.394633 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-10-08 15:48:01.394638 | orchestrator | Wednesday 08 October 2025 15:44:50 +0000 (0:00:02.170) 0:03:25.139 ***** 2025-10-08 15:48:01.394647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-10-08 15:48:01.394661 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394674 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-08 15:48:01.394679 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-08 15:48:01.394700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-08 15:48:01.394705 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394710 | orchestrator | 2025-10-08 15:48:01.394715 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-10-08 15:48:01.394720 | orchestrator | Wednesday 08 October 2025 15:44:52 +0000 (0:00:02.326) 0:03:27.465 ***** 2025-10-08 15:48:01.394727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394738 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394832 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-08 15:48:01.394851 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394856 | orchestrator | 2025-10-08 15:48:01.394861 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-10-08 15:48:01.394866 | orchestrator | Wednesday 08 October 2025 15:44:55 +0000 (0:00:03.014) 0:03:30.480 ***** 2025-10-08 15:48:01.394871 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.394876 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.394881 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.394885 | 
orchestrator | 2025-10-08 15:48:01.394890 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-10-08 15:48:01.394895 | orchestrator | Wednesday 08 October 2025 15:44:57 +0000 (0:00:01.921) 0:03:32.402 ***** 2025-10-08 15:48:01.394900 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394905 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394910 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394915 | orchestrator | 2025-10-08 15:48:01.394920 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-10-08 15:48:01.394925 | orchestrator | Wednesday 08 October 2025 15:44:59 +0000 (0:00:01.453) 0:03:33.855 ***** 2025-10-08 15:48:01.394930 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.394938 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.394944 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.394948 | orchestrator | 2025-10-08 15:48:01.394953 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-10-08 15:48:01.394976 | orchestrator | Wednesday 08 October 2025 15:44:59 +0000 (0:00:00.338) 0:03:34.193 ***** 2025-10-08 15:48:01.395003 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.395014 | orchestrator | 2025-10-08 15:48:01.395035 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-10-08 15:48:01.395046 | orchestrator | Wednesday 08 October 2025 15:45:00 +0000 (0:00:01.312) 0:03:35.506 ***** 2025-10-08 15:48:01.395051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-08 15:48:01.395072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-08 15:48:01.395099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2025-10-08 15:48:01.395112 | orchestrator | 2025-10-08 15:48:01.395117 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-10-08 15:48:01.395122 | orchestrator | Wednesday 08 October 2025 15:45:02 +0000 (0:00:01.352) 0:03:36.859 ***** 2025-10-08 15:48:01.395127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-08 15:48:01.395146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-08 15:48:01.395152 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.395157 | orchestrator | skipping: 
[testbed-node-1] 2025-10-08 15:48:01.395162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-08 15:48:01.395170 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.395175 | orchestrator | 2025-10-08 15:48:01.395180 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-10-08 15:48:01.395185 | orchestrator | Wednesday 08 October 2025 15:45:02 +0000 (0:00:00.401) 0:03:37.261 ***** 2025-10-08 15:48:01.395190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-08 15:48:01.395195 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.395211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-08 15:48:01.395216 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.395222 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-08 15:48:01.395226 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.395231 | orchestrator | 2025-10-08 15:48:01.395236 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-10-08 15:48:01.395241 | orchestrator | Wednesday 08 October 2025 15:45:03 +0000 (0:00:00.861) 0:03:38.122 ***** 2025-10-08 15:48:01.395246 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.395251 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.395256 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.395261 | orchestrator | 2025-10-08 15:48:01.395265 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-10-08 15:48:01.395270 | orchestrator | Wednesday 08 October 2025 15:45:03 +0000 (0:00:00.446) 0:03:38.568 ***** 2025-10-08 15:48:01.395275 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.395280 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.395285 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.395290 | orchestrator | 2025-10-08 15:48:01.395295 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-10-08 15:48:01.395300 | orchestrator | Wednesday 08 October 2025 15:45:05 +0000 (0:00:01.337) 0:03:39.906 ***** 2025-10-08 15:48:01.395304 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.395309 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.395314 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.395319 | orchestrator | 2025-10-08 15:48:01.395324 | orchestrator | TASK [include_role : neutron] 
************************************************** 2025-10-08 15:48:01.395329 | orchestrator | Wednesday 08 October 2025 15:45:05 +0000 (0:00:00.337) 0:03:40.243 ***** 2025-10-08 15:48:01.395334 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.395339 | orchestrator | 2025-10-08 15:48:01.395343 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-10-08 15:48:01.395348 | orchestrator | Wednesday 08 October 2025 15:45:07 +0000 (0:00:01.458) 0:03:41.702 ***** 2025-10-08 15:48:01.395355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 15:48:01.395365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-08 15:48:01.395397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 15:48:01.395517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.395572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-08 15:48:01.395603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2025-10-08 15:48:01.395643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 15:48:01.395654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.395696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-08 15:48:01.395703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-08 15:48:01.395809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395814 | orchestrator | 2025-10-08 15:48:01.395819 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-10-08 15:48:01.395825 | orchestrator | Wednesday 08 October 2025 15:45:11 +0000 (0:00:04.277) 0:03:45.979 ***** 2025-10-08 15:48:01.395834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 15:48:01.395839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-08 15:48:01.395872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.395891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-10-08 15:48:01.395899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 15:48:01.395956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.395980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 15:48:01.396054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-08 15:48:01.396061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-08 15:48:01.396087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-10-08 15:48:01.396096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-08 15:48:01.396200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-10-08 15:48:01.396245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-08 15:48:01.396251 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.396265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 15:48:01.396303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 15:48:01.396331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-08 15:48:01.396365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-08 15:48:01.396370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-08 15:48:01.396380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396415 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.396423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-08 15:48:01.396427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-08 15:48:01.396432 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.396451 | orchestrator |
2025-10-08 15:48:01.396457 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-10-08 15:48:01.396468 | orchestrator | Wednesday 08 October 2025 15:45:12 +0000 (0:00:01.462) 0:03:47.442 *****
2025-10-08 15:48:01.396473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396483 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.396490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396500 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.396505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-08 15:48:01.396517 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.396522 | orchestrator |
2025-10-08 15:48:01.396527 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-10-08 15:48:01.396531 | orchestrator | Wednesday 08 October 2025 15:45:14 +0000 (0:00:02.011) 0:03:49.453 *****
2025-10-08 15:48:01.396536 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.396541 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.396545 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.396550 | orchestrator |
2025-10-08 15:48:01.396554 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-10-08 15:48:01.396559 | orchestrator | Wednesday 08 October 2025 15:45:16 +0000 (0:00:01.382) 0:03:50.836 *****
2025-10-08 15:48:01.396564 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.396568 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.396573 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.396595 | orchestrator |
2025-10-08 15:48:01.396606 | orchestrator | TASK [include_role : placement] ************************************************
2025-10-08 15:48:01.396611 | orchestrator | Wednesday 08 October 2025 15:45:18 +0000 (0:00:02.211) 0:03:53.047 *****
2025-10-08 15:48:01.396615 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.396620 | orchestrator |
2025-10-08 15:48:01.396625 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-10-08 15:48:01.396629 | orchestrator | Wednesday 08 October 2025 15:45:19 +0000 (0:00:01.221) 0:03:54.268 *****
2025-10-08 15:48:01.396665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396688 | orchestrator |
2025-10-08 15:48:01.396693 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-10-08 15:48:01.396698 | orchestrator | Wednesday 08 October 2025 15:45:23 +0000 (0:00:03.941) 0:03:58.210 *****
2025-10-08 15:48:01.396702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396707 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.396722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396728 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.396733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396737 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.396742 | orchestrator |
2025-10-08 15:48:01.396747 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-10-08 15:48:01.396751 | orchestrator | Wednesday 08 October 2025 15:45:24 +0000 (0:00:00.571) 0:03:58.782 *****
2025-10-08 15:48:01.396756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396769 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.396776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396786 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.396790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-10-08 15:48:01.396800 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.396805 | orchestrator |
2025-10-08 15:48:01.396809 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-10-08 15:48:01.396814 | orchestrator | Wednesday 08 October 2025 15:45:24 +0000 (0:00:00.774) 0:03:59.556 *****
2025-10-08 15:48:01.396818 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.396823 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.396828 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.396832 | orchestrator |
2025-10-08 15:48:01.396837 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-10-08 15:48:01.396842 | orchestrator | Wednesday 08 October 2025 15:45:26 +0000 (0:00:01.928) 0:04:01.485 *****
2025-10-08 15:48:01.396846 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.396851 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.396855 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.396860 | orchestrator |
2025-10-08 15:48:01.396865 | orchestrator | TASK [include_role : nova] *****************************************************
2025-10-08 15:48:01.396869 | orchestrator | Wednesday 08 October 2025 15:45:28 +0000 (0:00:01.538) 0:04:03.294 *****
2025-10-08 15:48:01.396874 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.396878 | orchestrator |
2025-10-08 15:48:01.396883 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-10-08 15:48:01.396897 | orchestrator | Wednesday 08 October 2025 15:45:30 +0000 (0:00:01.538) 0:04:04.832 *****
2025-10-08 15:48:01.396902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.396943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.396953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes':
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.396963 | orchestrator | 2025-10-08 15:48:01.396967 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-10-08 15:48:01.396972 | orchestrator | Wednesday 08 October 2025 15:45:34 +0000 (0:00:04.342) 0:04:09.175 ***** 2025-10-08 15:48:01.396987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.396993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397006 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.397019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397032 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.397046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397059 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397063 | orchestrator | 2025-10-08 15:48:01.397068 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-10-08 15:48:01.397073 | orchestrator | Wednesday 08 October 2025 15:45:35 +0000 (0:00:01.237) 0:04:10.412 ***** 2025-10-08 15:48:01.397078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397097 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397148 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-08 15:48:01.397171 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397176 | orchestrator | 2025-10-08 15:48:01.397181 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-10-08 15:48:01.397185 | orchestrator | Wednesday 08 October 2025 15:45:36 +0000 (0:00:00.930) 0:04:11.343 ***** 2025-10-08 15:48:01.397190 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.397195 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.397199 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.397204 | orchestrator | 2025-10-08 15:48:01.397209 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-10-08 15:48:01.397213 | orchestrator | Wednesday 08 October 2025 15:45:38 +0000 (0:00:01.347) 0:04:12.690 ***** 2025-10-08 15:48:01.397218 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.397222 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.397227 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.397232 | orchestrator | 2025-10-08 15:48:01.397236 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-10-08 15:48:01.397241 | orchestrator | Wednesday 08 October 2025 15:45:40 +0000 (0:00:02.226) 0:04:14.917 ***** 2025-10-08 15:48:01.397246 | orchestrator | included: nova-cell for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-10-08 15:48:01.397250 | orchestrator | 2025-10-08 15:48:01.397255 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-10-08 15:48:01.397259 | orchestrator | Wednesday 08 October 2025 15:45:41 +0000 (0:00:01.558) 0:04:16.475 ***** 2025-10-08 15:48:01.397267 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-10-08 15:48:01.397272 | orchestrator | 2025-10-08 15:48:01.397276 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-10-08 15:48:01.397281 | orchestrator | Wednesday 08 October 2025 15:45:42 +0000 (0:00:00.848) 0:04:17.324 ***** 2025-10-08 15:48:01.397286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-08 15:48:01.397290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-08 15:48:01.397308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-08 15:48:01.397314 | orchestrator | 2025-10-08 15:48:01.397318 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-10-08 15:48:01.397323 | orchestrator | Wednesday 08 October 2025 15:45:47 +0000 (0:00:04.465) 0:04:21.790 ***** 2025-10-08 15:48:01.397328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397333 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397342 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397347 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397352 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397357 | orchestrator | 2025-10-08 15:48:01.397361 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-10-08 15:48:01.397366 | orchestrator | Wednesday 08 October 2025 15:45:48 +0000 (0:00:01.050) 0:04:22.840 ***** 2025-10-08 15:48:01.397371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397383 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397402 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-08 15:48:01.397416 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397421 | orchestrator | 2025-10-08 15:48:01.397425 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-08 15:48:01.397430 | orchestrator | Wednesday 08 October 2025 15:45:49 +0000 (0:00:01.533) 0:04:24.373 ***** 2025-10-08 15:48:01.397435 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.397439 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.397444 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.397448 | orchestrator | 2025-10-08 15:48:01.397453 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-10-08 15:48:01.397466 | orchestrator | Wednesday 08 October 2025 15:45:52 +0000 (0:00:02.447) 0:04:26.820 ***** 2025-10-08 15:48:01.397471 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.397476 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.397480 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.397485 | orchestrator | 2025-10-08 15:48:01.397490 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-10-08 15:48:01.397494 | orchestrator | Wednesday 08 October 2025 15:45:55 
+0000 (0:00:03.121) 0:04:29.942 ***** 2025-10-08 15:48:01.397499 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-10-08 15:48:01.397504 | orchestrator | 2025-10-08 15:48:01.397508 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-10-08 15:48:01.397513 | orchestrator | Wednesday 08 October 2025 15:45:56 +0000 (0:00:01.437) 0:04:31.380 ***** 2025-10-08 15:48:01.397518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397522 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397532 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397545 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397550 | orchestrator | 2025-10-08 15:48:01.397557 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-10-08 15:48:01.397562 | orchestrator | Wednesday 08 October 2025 15:45:58 +0000 (0:00:01.281) 0:04:32.661 ***** 2025-10-08 15:48:01.397566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397571 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 
15:48:01.397581 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-08 15:48:01.397599 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397603 | orchestrator | 2025-10-08 15:48:01.397608 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-10-08 15:48:01.397613 | orchestrator | Wednesday 08 October 2025 15:45:59 +0000 (0:00:01.332) 0:04:33.993 ***** 2025-10-08 15:48:01.397617 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397622 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397626 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397631 | orchestrator | 2025-10-08 15:48:01.397636 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-08 15:48:01.397640 | orchestrator | Wednesday 08 October 2025 15:46:01 +0000 (0:00:01.844) 0:04:35.838 ***** 2025-10-08 15:48:01.397645 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.397650 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.397654 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.397659 | orchestrator | 2025-10-08 15:48:01.397663 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-10-08 15:48:01.397668 | orchestrator | Wednesday 08 October 2025 15:46:03 +0000 (0:00:02.441) 0:04:38.279 ***** 2025-10-08 15:48:01.397673 | 
orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.397677 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.397682 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.397686 | orchestrator | 2025-10-08 15:48:01.397691 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-10-08 15:48:01.397700 | orchestrator | Wednesday 08 October 2025 15:46:06 +0000 (0:00:03.191) 0:04:41.470 ***** 2025-10-08 15:48:01.397704 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-10-08 15:48:01.397709 | orchestrator | 2025-10-08 15:48:01.397713 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-10-08 15:48:01.397718 | orchestrator | Wednesday 08 October 2025 15:46:07 +0000 (0:00:00.856) 0:04:42.326 ***** 2025-10-08 15:48:01.397723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397727 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397740 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397749 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397754 | orchestrator | 2025-10-08 15:48:01.397759 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-10-08 15:48:01.397763 | orchestrator | Wednesday 08 October 2025 15:46:09 +0000 (0:00:01.311) 0:04:43.637 ***** 2025-10-08 15:48:01.397768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397773 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397791 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-08 15:48:01.397806 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397810 | orchestrator | 2025-10-08 15:48:01.397815 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-10-08 15:48:01.397820 | orchestrator | Wednesday 08 October 2025 15:46:10 +0000 (0:00:01.417) 0:04:45.055 ***** 2025-10-08 15:48:01.397824 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.397829 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.397833 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.397838 | orchestrator | 2025-10-08 15:48:01.397842 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-08 15:48:01.397847 | orchestrator | Wednesday 08 October 2025 15:46:12 +0000 (0:00:01.582) 0:04:46.638 ***** 2025-10-08 15:48:01.397852 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.397856 | orchestrator | ok: 
[testbed-node-1]
2025-10-08 15:48:01.397861 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:48:01.397865 | orchestrator |
2025-10-08 15:48:01.397870 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-10-08 15:48:01.397875 | orchestrator | Wednesday 08 October 2025 15:46:14 +0000 (0:00:02.449) 0:04:49.087 *****
2025-10-08 15:48:01.397879 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:48:01.397884 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:48:01.397888 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:48:01.397893 | orchestrator |
2025-10-08 15:48:01.397897 | orchestrator | TASK [include_role : octavia] **************************************************
2025-10-08 15:48:01.397902 | orchestrator | Wednesday 08 October 2025 15:46:17 +0000 (0:00:03.350) 0:04:52.437 *****
2025-10-08 15:48:01.397907 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.397911 | orchestrator |
2025-10-08 15:48:01.397916 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-10-08 15:48:01.397920 | orchestrator | Wednesday 08 October 2025 15:46:19 +0000 (0:00:01.574) 0:04:54.011 *****
2025-10-08 15:48:01.397928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876',
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.397933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.397947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.397956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.397961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.397966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.397973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.397978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.397983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.398005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.398010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.398035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.398067 | orchestrator | 2025-10-08 15:48:01.398072 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-10-08 15:48:01.398076 | orchestrator | Wednesday 08 October 2025 15:46:22 +0000 (0:00:03.447) 0:04:57.459 ***** 2025-10-08 15:48:01.398091 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.398097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.398102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.398119 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.398128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.398173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.398178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 15:48:01.398193 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.398200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.398210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 15:48:01.398224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 15:48:01.398234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:48:01.398239 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.398244 | orchestrator |
2025-10-08 15:48:01.398248 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-10-08 15:48:01.398253 | orchestrator | Wednesday 08 October 2025 15:46:23 +0000 (0:00:00.717) 0:04:58.177 *****
2025-10-08 15:48:01.398258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398267 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.398274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398288 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.398293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-08 15:48:01.398302 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.398307 | orchestrator |
2025-10-08 15:48:01.398312 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-10-08 15:48:01.398316 | orchestrator | Wednesday 08 October 2025 15:46:25 +0000 (0:00:01.550) 0:04:59.728 *****
2025-10-08 15:48:01.398321 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.398326 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.398330 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.398335 | orchestrator |
2025-10-08 15:48:01.398339 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-10-08 15:48:01.398344 | orchestrator | Wednesday 08 October 2025 15:46:26 +0000 (0:00:01.475) 0:05:01.203 *****
2025-10-08 15:48:01.398348 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:48:01.398353 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:48:01.398358 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:48:01.398362 | orchestrator |
2025-10-08 15:48:01.398367 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-10-08 15:48:01.398371 | orchestrator | Wednesday 08 October 2025 15:46:28 +0000 (0:00:02.152) 0:05:03.355 *****
2025-10-08 15:48:01.398376 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.398381 | orchestrator |
2025-10-08 15:48:01.398385 | orchestrator | TASK
[haproxy-config : Copying over opensearch haproxy config] ***************** 2025-10-08 15:48:01.398390 | orchestrator | Wednesday 08 October 2025 15:46:30 +0000 (0:00:01.335) 0:05:04.691 ***** 2025-10-08 15:48:01.398404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:48:01.398410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:48:01.398417 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:48:01.398425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:48:01.398440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:48:01.398446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:48:01.398450 | orchestrator | 2025-10-08 15:48:01.398455 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-10-08 15:48:01.398462 | orchestrator | Wednesday 08 October 2025 15:46:36 +0000 (0:00:06.044) 0:05:10.735 ***** 2025-10-08 15:48:01.398470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-08 15:48:01.398475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-08 15:48:01.398479 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.398488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-08 15:48:01.398492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-08 15:48:01.398501 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.398505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-08 15:48:01.398512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-08 15:48:01.398517 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.398521 | orchestrator | 2025-10-08 15:48:01.398525 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-10-08 15:48:01.398530 | orchestrator | Wednesday 08 October 2025 15:46:36 +0000 (0:00:00.672) 0:05:11.408 ***** 2025-10-08 15:48:01.398534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-08 15:48:01.398538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-08 15:48:01.398556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398561 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.398565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398573 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.398578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-08 15:48:01.398586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-08 15:48:01.398594 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.398598 | orchestrator | 2025-10-08 15:48:01.398603 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-10-08 15:48:01.398607 | orchestrator | Wednesday 08 October 2025 15:46:37 +0000 (0:00:00.956) 0:05:12.364 ***** 2025-10-08 15:48:01.398611 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.398615 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.398619 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.398623 | orchestrator | 2025-10-08 15:48:01.398628 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2025-10-08 15:48:01.398632 | orchestrator | Wednesday 08 October 2025 15:46:38 +0000 (0:00:01.054) 0:05:13.418 ***** 2025-10-08 15:48:01.398636 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.398640 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.398644 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.398648 | orchestrator | 2025-10-08 15:48:01.398653 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-10-08 15:48:01.398660 | orchestrator | Wednesday 08 October 2025 15:46:40 +0000 (0:00:01.436) 0:05:14.855 ***** 2025-10-08 15:48:01.398664 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:48:01.398668 | orchestrator | 2025-10-08 15:48:01.398672 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-10-08 15:48:01.398676 | orchestrator | Wednesday 08 October 2025 15:46:41 +0000 (0:00:01.432) 0:05:16.287 ***** 2025-10-08 15:48:01.398681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:48:01.398685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:48:01.398698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:48:01.398717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:48:01.398723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:48:01.398728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398732 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:48:01.398753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:48:01.398757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:48:01.398762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:48:01.398777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:48:01.398785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-10-08 15:48:01.398793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:48:01.398809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:48:01.398813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-08 15:48:01.398826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:48:01.398831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:48:01.398835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})
2025-10-08 15:48:01.398842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.398861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.398870 | orchestrator |
2025-10-08 15:48:01.398874 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-10-08 15:48:01.398878 | orchestrator | Wednesday 08 October 2025 15:46:46 +0000 (0:00:04.581) 0:05:20.869 *****
2025-10-08 15:48:01.398883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-10-08 15:48:01.398887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:48:01.398894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.398914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-10-08 15:48:01.398919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-10-08 15:48:01.398923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.398939 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.398943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-10-08 15:48:01.398954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:48:01.398959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.398974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-10-08 15:48:01.398979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-10-08 15:48:01.398987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.398999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.399003 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-10-08 15:48:01.399012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:48:01.399020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.399028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.399033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.399040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-10-08 15:48:01.399045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-10-08 15:48:01.399052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.399058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:48:01.399069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:48:01.399073 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399077 | orchestrator |
2025-10-08 15:48:01.399082 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-10-08 15:48:01.399086 | orchestrator | Wednesday 08 October 2025 15:46:47 +0000 (0:00:01.407) 0:05:22.276 *****
2025-10-08 15:48:01.399090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399110 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399153 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-10-08 15:48:01.399166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-10-08 15:48:01.399180 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399185 | orchestrator |
2025-10-08 15:48:01.399189 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-10-08 15:48:01.399193 | orchestrator | Wednesday 08 October 2025 15:46:48 +0000 (0:00:01.079) 0:05:23.355 *****
2025-10-08 15:48:01.399198 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399202 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399206 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399210 | orchestrator |
2025-10-08 15:48:01.399214 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-10-08 15:48:01.399218 | orchestrator | Wednesday 08 October 2025 15:46:49 +0000 (0:00:00.436) 0:05:23.792 *****
2025-10-08 15:48:01.399223 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399227 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399231 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399235 | orchestrator |
2025-10-08 15:48:01.399239 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-10-08 15:48:01.399243 | orchestrator | Wednesday 08 October 2025 15:46:50 +0000 (0:00:01.468) 0:05:25.260 *****
2025-10-08 15:48:01.399247 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.399251 | orchestrator |
2025-10-08 15:48:01.399256 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-10-08 15:48:01.399260 | orchestrator | Wednesday 08 October 2025 15:46:52 +0000 (0:00:01.773) 0:05:27.034 *****
2025-10-08 15:48:01.399267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399286 | orchestrator |
2025-10-08 15:48:01.399291 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-10-08 15:48:01.399297 | orchestrator | Wednesday 08 October 2025 15:46:54 +0000 (0:00:02.521) 0:05:29.555 *****
2025-10-08 15:48:01.399302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399314 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399319 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-08 15:48:01.399330 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399335 | orchestrator |
2025-10-08 15:48:01.399339 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-10-08 15:48:01.399343 | orchestrator | Wednesday 08 October 2025 15:46:55 +0000 (0:00:00.437) 0:05:29.993 *****
2025-10-08 15:48:01.399347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-10-08 15:48:01.399351 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-10-08 15:48:01.399360 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-10-08 15:48:01.399368 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399373 | orchestrator |
2025-10-08 15:48:01.399377 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-10-08 15:48:01.399381 | orchestrator | Wednesday 08 October 2025 15:46:56 +0000 (0:00:01.096) 0:05:31.090 *****
2025-10-08 15:48:01.399385 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399389 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399396 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399400 | orchestrator |
2025-10-08 15:48:01.399404 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-10-08 15:48:01.399409 | orchestrator | Wednesday 08 October 2025 15:46:56 +0000 (0:00:00.471) 0:05:31.562 *****
2025-10-08 15:48:01.399413 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:48:01.399417 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:48:01.399421 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:48:01.399425 | orchestrator |
2025-10-08 15:48:01.399429 | orchestrator | TASK [include_role : skyline] **************************************************
2025-10-08 15:48:01.399434 | orchestrator | Wednesday 08 October 2025 15:46:58 +0000 (0:00:01.337) 0:05:32.899 *****
2025-10-08 15:48:01.399438 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:48:01.399442 | orchestrator |
2025-10-08 15:48:01.399446 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-10-08 15:48:01.399450 | orchestrator | Wednesday 08 October 2025 15:47:00 +0000 (0:00:01.722) 0:05:34.622 *****
2025-10-08 15:48:01.399454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.399462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-10-08 15:48:01.399471 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.399478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.399482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.399490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-08 15:48:01.399498 | orchestrator | 2025-10-08 15:48:01.399502 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-10-08 15:48:01.399506 | orchestrator | Wednesday 08 October 2025 
15:47:06 +0000 (0:00:06.323) 0:05:40.946 ***** 2025-10-08 15:48:01.399511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399522 | 
orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399542 | orchestrator 
| skipping: [testbed-node-1] 2025-10-08 15:48:01.399546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-08 15:48:01.399555 | orchestrator | skipping: 
[testbed-node-2] 2025-10-08 15:48:01.399559 | orchestrator | 2025-10-08 15:48:01.399563 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-10-08 15:48:01.399568 | orchestrator | Wednesday 08 October 2025 15:47:06 +0000 (0:00:00.656) 0:05:41.602 ***** 2025-10-08 15:48:01.399575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399593 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399609 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399620 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-08 15:48:01.399641 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399645 | orchestrator | 2025-10-08 15:48:01.399650 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-10-08 15:48:01.399654 | orchestrator | Wednesday 08 October 2025 15:47:08 +0000 (0:00:01.649) 0:05:43.251 ***** 2025-10-08 15:48:01.399658 | orchestrator | changed: [testbed-node-0] 
2025-10-08 15:48:01.399662 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.399666 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.399671 | orchestrator | 2025-10-08 15:48:01.399675 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-10-08 15:48:01.399679 | orchestrator | Wednesday 08 October 2025 15:47:10 +0000 (0:00:01.477) 0:05:44.729 ***** 2025-10-08 15:48:01.399683 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.399687 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.399691 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.399695 | orchestrator | 2025-10-08 15:48:01.399700 | orchestrator | TASK [include_role : swift] **************************************************** 2025-10-08 15:48:01.399704 | orchestrator | Wednesday 08 October 2025 15:47:12 +0000 (0:00:02.158) 0:05:46.888 ***** 2025-10-08 15:48:01.399708 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399712 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399716 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399720 | orchestrator | 2025-10-08 15:48:01.399725 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-10-08 15:48:01.399729 | orchestrator | Wednesday 08 October 2025 15:47:12 +0000 (0:00:00.324) 0:05:47.213 ***** 2025-10-08 15:48:01.399733 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399737 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399741 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399745 | orchestrator | 2025-10-08 15:48:01.399750 | orchestrator | TASK [include_role : trove] **************************************************** 2025-10-08 15:48:01.399754 | orchestrator | Wednesday 08 October 2025 15:47:12 +0000 (0:00:00.334) 0:05:47.547 ***** 2025-10-08 15:48:01.399758 | orchestrator | skipping: [testbed-node-0] 
2025-10-08 15:48:01.399764 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399769 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399773 | orchestrator | 2025-10-08 15:48:01.399781 | orchestrator | TASK [include_role : venus] **************************************************** 2025-10-08 15:48:01.399785 | orchestrator | Wednesday 08 October 2025 15:47:13 +0000 (0:00:00.673) 0:05:48.221 ***** 2025-10-08 15:48:01.399789 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399793 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399797 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399801 | orchestrator | 2025-10-08 15:48:01.399806 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-10-08 15:48:01.399810 | orchestrator | Wednesday 08 October 2025 15:47:13 +0000 (0:00:00.365) 0:05:48.586 ***** 2025-10-08 15:48:01.399814 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399818 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399822 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399826 | orchestrator | 2025-10-08 15:48:01.399830 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-10-08 15:48:01.399835 | orchestrator | Wednesday 08 October 2025 15:47:14 +0000 (0:00:00.313) 0:05:48.899 ***** 2025-10-08 15:48:01.399839 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.399843 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.399847 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.399851 | orchestrator | 2025-10-08 15:48:01.399855 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-10-08 15:48:01.399860 | orchestrator | Wednesday 08 October 2025 15:47:15 +0000 (0:00:00.849) 0:05:49.749 ***** 2025-10-08 15:48:01.399864 | orchestrator | ok: [testbed-node-0] 
2025-10-08 15:48:01.399868 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.399872 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.399876 | orchestrator | 2025-10-08 15:48:01.399880 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-10-08 15:48:01.399885 | orchestrator | Wednesday 08 October 2025 15:47:15 +0000 (0:00:00.857) 0:05:50.606 ***** 2025-10-08 15:48:01.399889 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.399893 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.399897 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.399901 | orchestrator | 2025-10-08 15:48:01.399905 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-10-08 15:48:01.399909 | orchestrator | Wednesday 08 October 2025 15:47:16 +0000 (0:00:00.398) 0:05:51.005 ***** 2025-10-08 15:48:01.399913 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.399918 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.399922 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.399926 | orchestrator | 2025-10-08 15:48:01.399933 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-10-08 15:48:01.399937 | orchestrator | Wednesday 08 October 2025 15:47:17 +0000 (0:00:00.928) 0:05:51.933 ***** 2025-10-08 15:48:01.399941 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.399945 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.399949 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.399954 | orchestrator | 2025-10-08 15:48:01.399958 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-10-08 15:48:01.399962 | orchestrator | Wednesday 08 October 2025 15:47:18 +0000 (0:00:01.246) 0:05:53.180 ***** 2025-10-08 15:48:01.399966 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.399970 | orchestrator | ok: [testbed-node-1] 2025-10-08 
15:48:01.399974 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.399979 | orchestrator | 2025-10-08 15:48:01.399983 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-10-08 15:48:01.399987 | orchestrator | Wednesday 08 October 2025 15:47:19 +0000 (0:00:00.882) 0:05:54.063 ***** 2025-10-08 15:48:01.399991 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.399995 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.400000 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.400004 | orchestrator | 2025-10-08 15:48:01.400008 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-10-08 15:48:01.400012 | orchestrator | Wednesday 08 October 2025 15:47:29 +0000 (0:00:10.008) 0:06:04.071 ***** 2025-10-08 15:48:01.400019 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.400024 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.400028 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.400032 | orchestrator | 2025-10-08 15:48:01.400036 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-10-08 15:48:01.400040 | orchestrator | Wednesday 08 October 2025 15:47:30 +0000 (0:00:00.751) 0:06:04.822 ***** 2025-10-08 15:48:01.400044 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.400048 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.400053 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.400057 | orchestrator | 2025-10-08 15:48:01.400061 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-10-08 15:48:01.400065 | orchestrator | Wednesday 08 October 2025 15:47:43 +0000 (0:00:12.983) 0:06:17.805 ***** 2025-10-08 15:48:01.400069 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.400073 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.400077 | orchestrator | ok: 
[testbed-node-2] 2025-10-08 15:48:01.400082 | orchestrator | 2025-10-08 15:48:01.400086 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-10-08 15:48:01.400090 | orchestrator | Wednesday 08 October 2025 15:47:44 +0000 (0:00:01.138) 0:06:18.944 ***** 2025-10-08 15:48:01.400094 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:48:01.400098 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:48:01.400102 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:48:01.400107 | orchestrator | 2025-10-08 15:48:01.400111 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-10-08 15:48:01.400115 | orchestrator | Wednesday 08 October 2025 15:47:53 +0000 (0:00:09.641) 0:06:28.585 ***** 2025-10-08 15:48:01.400119 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400123 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400128 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.400141 | orchestrator | 2025-10-08 15:48:01.400145 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-10-08 15:48:01.400149 | orchestrator | Wednesday 08 October 2025 15:47:54 +0000 (0:00:00.344) 0:06:28.929 ***** 2025-10-08 15:48:01.400154 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400158 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400162 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.400166 | orchestrator | 2025-10-08 15:48:01.400172 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-10-08 15:48:01.400177 | orchestrator | Wednesday 08 October 2025 15:47:54 +0000 (0:00:00.362) 0:06:29.292 ***** 2025-10-08 15:48:01.400181 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400185 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400189 | orchestrator | skipping: 
[testbed-node-2] 2025-10-08 15:48:01.400194 | orchestrator | 2025-10-08 15:48:01.400198 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-10-08 15:48:01.400202 | orchestrator | Wednesday 08 October 2025 15:47:55 +0000 (0:00:00.681) 0:06:29.974 ***** 2025-10-08 15:48:01.400206 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400210 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400215 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.400219 | orchestrator | 2025-10-08 15:48:01.400223 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-10-08 15:48:01.400227 | orchestrator | Wednesday 08 October 2025 15:47:55 +0000 (0:00:00.344) 0:06:30.318 ***** 2025-10-08 15:48:01.400231 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400235 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400240 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.400244 | orchestrator | 2025-10-08 15:48:01.400248 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-10-08 15:48:01.400252 | orchestrator | Wednesday 08 October 2025 15:47:56 +0000 (0:00:00.368) 0:06:30.686 ***** 2025-10-08 15:48:01.400259 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:48:01.400263 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:48:01.400267 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:48:01.400272 | orchestrator | 2025-10-08 15:48:01.400276 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-10-08 15:48:01.400280 | orchestrator | Wednesday 08 October 2025 15:47:56 +0000 (0:00:00.345) 0:06:31.032 ***** 2025-10-08 15:48:01.400284 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.400288 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.400293 | orchestrator | ok: [testbed-node-2] 
2025-10-08 15:48:01.400297 | orchestrator | 2025-10-08 15:48:01.400301 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-10-08 15:48:01.400305 | orchestrator | Wednesday 08 October 2025 15:47:57 +0000 (0:00:01.391) 0:06:32.424 ***** 2025-10-08 15:48:01.400309 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:48:01.400313 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:48:01.400318 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:48:01.400322 | orchestrator | 2025-10-08 15:48:01.400326 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:48:01.400332 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-08 15:48:01.400337 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-08 15:48:01.400341 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-08 15:48:01.400345 | orchestrator | 2025-10-08 15:48:01.400350 | orchestrator | 2025-10-08 15:48:01.400354 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:48:01.400358 | orchestrator | Wednesday 08 October 2025 15:47:58 +0000 (0:00:00.860) 0:06:33.284 ***** 2025-10-08 15:48:01.400362 | orchestrator | =============================================================================== 2025-10-08 15:48:01.400366 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.98s 2025-10-08 15:48:01.400371 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.01s 2025-10-08 15:48:01.400375 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.64s 2025-10-08 15:48:01.400379 | orchestrator | haproxy-config : Copying over skyline haproxy config 
-------------------- 6.32s
2025-10-08 15:48:01.400383 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.04s
2025-10-08 15:48:01.400387 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.43s
2025-10-08 15:48:01.400391 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.15s
2025-10-08 15:48:01.400395 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.11s
2025-10-08 15:48:01.400400 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.00s
2025-10-08 15:48:01.400404 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.61s
2025-10-08 15:48:01.400408 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.58s
2025-10-08 15:48:01.400412 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.55s
2025-10-08 15:48:01.400416 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.47s
2025-10-08 15:48:01.400420 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.34s
2025-10-08 15:48:01.400424 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.31s
2025-10-08 15:48:01.400428 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.28s
2025-10-08 15:48:01.400433 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.18s
2025-10-08 15:48:01.400437 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.95s
2025-10-08 15:48:01.400444 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.94s
2025-10-08 15:48:01.400448 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 3.92s
2025-10-08 15:48:04.411648 | orchestrator | 2025-10-08 15:48:04 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED
2025-10-08 15:48:04.412437 | orchestrator | 2025-10-08 15:48:04 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state STARTED
2025-10-08 15:48:04.414351 | orchestrator | 2025-10-08 15:48:04 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED
2025-10-08 15:48:04.414395 | orchestrator | 2025-10-08 15:48:04 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 15:48:07 to 15:49:48; tasks c525e092-38d1-48a7-9760-fda7f6b63900, 9f08555d-b079-4ffe-9021-ce13276ad6cb, and 1aac30e0-8d96-46d0-8b24-1d528accbfcc remained in state STARTED throughout ...]
2025-10-08 15:49:51.183545 | orchestrator | 2025-10-08 15:49:51 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED
2025-10-08 15:49:51.190836 | orchestrator | 2025-10-08 15:49:51 | INFO  | Task 9f08555d-b079-4ffe-9021-ce13276ad6cb is in state SUCCESS
2025-10-08 15:49:51.193778 | orchestrator |
2025-10-08 15:49:51.193819 | orchestrator |
2025-10-08 15:49:51.193831 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-10-08 15:49:51.193843 | orchestrator |
2025-10-08 15:49:51.193854 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-10-08 15:49:51.193866 | orchestrator | Wednesday 08 October 2025 15:39:03 +0000 (0:00:00.775) 0:00:00.775 *****
2025-10-08 15:49:51.194146 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.194231 | orchestrator |
2025-10-08 15:49:51.194243 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-10-08 15:49:51.194254 | orchestrator | Wednesday 08 October 2025 15:39:04 +0000 (0:00:01.150) 0:00:01.925 *****
2025-10-08 15:49:51.194266 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194278 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194289 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194300 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194310 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194320 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194331 | orchestrator |
2025-10-08 15:49:51.194381 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-10-08 15:49:51.194394 | orchestrator | Wednesday 08 October 2025 15:39:06 +0000 (0:00:01.569) 0:00:03.495 *****
2025-10-08 15:49:51.194407 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194418 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194430 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194442 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194454 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194466 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194477 | orchestrator |
2025-10-08 15:49:51.194489 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-10-08 15:49:51.194501 | orchestrator | Wednesday 08 October 2025 15:39:07 +0000 (0:00:00.772) 0:00:04.267 *****
2025-10-08 15:49:51.194513 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194526 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194574 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194587 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194598 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194610 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194649 | orchestrator |
2025-10-08 15:49:51.194662 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-10-08 15:49:51.194675 | orchestrator | Wednesday 08 October 2025 15:39:08 +0000 (0:00:01.217) 0:00:05.485 *****
2025-10-08 15:49:51.194718 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194731 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194743 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194754 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194765 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194775 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194786 | orchestrator |
2025-10-08 15:49:51.194797 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-10-08 15:49:51.194808 | orchestrator | Wednesday 08 October 2025 15:39:09 +0000 (0:00:00.671) 0:00:06.156 *****
2025-10-08 15:49:51.194819 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194829 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194840 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194851 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194861 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194872 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194883 | orchestrator |
2025-10-08 15:49:51.194893 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-10-08 15:49:51.194904 | orchestrator | Wednesday 08 October 2025 15:39:09 +0000 (0:00:00.581) 0:00:06.737 *****
2025-10-08 15:49:51.194915 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.194942 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.194952 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.194963 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.194974 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.194984 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.194995 | orchestrator |
2025-10-08 15:49:51.195006 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-10-08 15:49:51.195017 | orchestrator | Wednesday 08 October 2025 15:39:10 +0000 (0:00:00.898) 0:00:07.636 *****
2025-10-08 15:49:51.195028 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.195040 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.195051 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.195062 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.195073 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.195083 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.195094 | orchestrator |
2025-10-08 15:49:51.195128 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-10-08 15:49:51.195140 | orchestrator | Wednesday 08 October 2025 15:39:11 +0000 (0:00:00.863) 0:00:08.500 *****
2025-10-08 15:49:51.195172 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.195184 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.195194 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.195205 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.195216 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.195226 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.195237 | orchestrator |
2025-10-08 15:49:51.195247 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-10-08 15:49:51.195305 | orchestrator | Wednesday 08 October 2025 15:39:12 +0000 (0:00:00.910) 0:00:09.410 *****
2025-10-08 15:49:51.195318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.195329 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:49:51.195340 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:49:51.195351 | orchestrator |
2025-10-08 15:49:51.195362 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-10-08 15:49:51.195372 | orchestrator | Wednesday 08 October 2025 15:39:13 +0000 (0:00:00.724) 0:00:10.134 *****
2025-10-08 15:49:51.195383 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.195393 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.195494 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.195506 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.195516 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.195551 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.195562 | orchestrator |
2025-10-08 15:49:51.195587 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-10-08 15:49:51.195599 | orchestrator | Wednesday 08 October 2025 15:39:14 +0000 (0:00:01.326) 0:00:11.461 *****
2025-10-08 15:49:51.195610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.195628 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:49:51.195639 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:49:51.195650 | orchestrator |
2025-10-08 15:49:51.195660 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-10-08 15:49:51.195671 | orchestrator | Wednesday 08 October 2025 15:39:17 +0000 (0:00:03.147) 0:00:14.609 *****
2025-10-08 15:49:51.195682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.195693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 15:49:51.195703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 15:49:51.195714 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.195725 | orchestrator |
2025-10-08 15:49:51.195736 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-10-08 15:49:51.195755 | orchestrator | Wednesday 08 October 2025 15:39:17 +0000 (0:00:00.368) 0:00:14.978 *****
2025-10-08 15:49:51.195826 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.195869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.195881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.195918 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.195930 | orchestrator |
2025-10-08 15:49:51.195941 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-10-08 15:49:51.195979 | orchestrator | Wednesday 08 October 2025 15:39:19 +0000 (0:00:01.677) 0:00:16.655 *****
2025-10-08 15:49:51.195993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196018 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196030 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.196041 | orchestrator |
2025-10-08 15:49:51.196052 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-10-08 15:49:51.196063 | orchestrator | Wednesday 08 October 2025 15:39:20 +0000 (0:00:00.523) 0:00:17.179 *****
2025-10-08 15:49:51.196075 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-08 15:39:15.097826', 'end': '2025-10-08 15:39:15.406314', 'delta': '0:00:00.308488', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196100 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-08 15:39:16.402924', 'end': '2025-10-08 15:39:16.653660', 'delta': '0:00:00.250736', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196187 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-08 15:39:17.154641', 'end': '2025-10-08 15:39:17.476033', 'delta': '0:00:00.321392', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-10-08 15:49:51.196202 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.196242 | orchestrator |
2025-10-08 15:49:51.196307 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-10-08 15:49:51.196320 | orchestrator | Wednesday 08 October 2025 15:39:20 +0000 (0:00:00.599) 0:00:17.779 *****
2025-10-08 15:49:51.196331 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.196342 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.196353 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.196363 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.196374 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.196385 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.196396 | orchestrator |
2025-10-08 15:49:51.196406 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-10-08 15:49:51.196417 | orchestrator | Wednesday 08 October 2025 15:39:22 +0000 (0:00:01.844) 0:00:19.624 *****
2025-10-08 15:49:51.196428 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.196508 | orchestrator |
2025-10-08 15:49:51.196520 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-10-08 15:49:51.196530 | orchestrator | Wednesday 08 October 2025 15:39:23 +0000 (0:00:00.877) 0:00:20.502 *****
2025-10-08 15:49:51.196541 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.196552 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.196563 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.196574 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.196585 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.196596 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.196607 | orchestrator |
2025-10-08 15:49:51.196618 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-10-08 15:49:51.196628 | orchestrator | Wednesday 08 October 2025 15:39:24 +0000 (0:00:01.478) 0:00:21.981 *****
2025-10-08 15:49:51.196639 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.196650 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.196661 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.196672 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.196683 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.196693 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.196704 | orchestrator |
2025-10-08 15:49:51.196715 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-10-08 15:49:51.196726 | orchestrator | Wednesday 08 October 2025 15:39:26 +0000 (0:00:01.546) 0:00:23.528 *****
2025-10-08 15:49:51.196737 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.196748 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.196758 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.196769 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.196780 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.196791 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.196826 | orchestrator | 2025-10-08 15:49:51.196847 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-08 15:49:51.196859 | orchestrator | Wednesday 08 October 2025 15:39:27 +0000 (0:00:01.162) 0:00:24.690 ***** 2025-10-08 15:49:51.196870 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.196881 | orchestrator | 2025-10-08 15:49:51.196891 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-08 15:49:51.196902 | orchestrator | Wednesday 08 October 2025 15:39:28 +0000 (0:00:00.329) 0:00:25.020 ***** 2025-10-08 15:49:51.196913 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.196924 | orchestrator | 2025-10-08 15:49:51.196935 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-08 15:49:51.196945 | orchestrator | Wednesday 08 October 2025 15:39:28 +0000 (0:00:00.337) 0:00:25.357 ***** 2025-10-08 15:49:51.196956 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.196967 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.196978 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.196988 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.196999 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197010 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197021 | orchestrator | 2025-10-08 15:49:51.197032 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-10-08 15:49:51.197078 | orchestrator | Wednesday 08 October 2025 15:39:28 +0000 (0:00:00.617) 0:00:25.974 ***** 2025-10-08 15:49:51.197091 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.197102 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197113 | orchestrator | skipping: [testbed-node-2] 
2025-10-08 15:49:51.197123 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197134 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197261 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197273 | orchestrator | 2025-10-08 15:49:51.197284 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-08 15:49:51.197295 | orchestrator | Wednesday 08 October 2025 15:39:29 +0000 (0:00:00.862) 0:00:26.837 ***** 2025-10-08 15:49:51.197306 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.197317 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197328 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.197338 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197349 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197360 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197420 | orchestrator | 2025-10-08 15:49:51.197432 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-10-08 15:49:51.197444 | orchestrator | Wednesday 08 October 2025 15:39:30 +0000 (0:00:00.779) 0:00:27.616 ***** 2025-10-08 15:49:51.197454 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.197465 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197476 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.197486 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197497 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197508 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197518 | orchestrator | 2025-10-08 15:49:51.197529 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-08 15:49:51.197540 | orchestrator | Wednesday 08 October 2025 15:39:31 +0000 (0:00:01.135) 0:00:28.752 ***** 2025-10-08 15:49:51.197550 | orchestrator | skipping: [testbed-node-0] 
2025-10-08 15:49:51.197561 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.197572 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197582 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197593 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197604 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197614 | orchestrator | 2025-10-08 15:49:51.197625 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-08 15:49:51.197636 | orchestrator | Wednesday 08 October 2025 15:39:32 +0000 (0:00:00.587) 0:00:29.339 ***** 2025-10-08 15:49:51.197659 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.197668 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197678 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.197688 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197697 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197707 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197716 | orchestrator | 2025-10-08 15:49:51.197726 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-08 15:49:51.197736 | orchestrator | Wednesday 08 October 2025 15:39:33 +0000 (0:00:00.782) 0:00:30.122 ***** 2025-10-08 15:49:51.197746 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.197755 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.197765 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.197774 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.197784 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.197793 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.197803 | orchestrator | 2025-10-08 15:49:51.197812 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-10-08 15:49:51.197822 | orchestrator | 
Wednesday 08 October 2025 15:39:33 +0000 (0:00:00.796) 0:00:30.918 ***** 2025-10-08 15:49:51.197832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.197939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part1', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part14', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part15', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part16', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.198099 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.198117 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.198128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-10-08 15:49:51.198181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.198271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.198281 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.198292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.198389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part1', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part14', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part15', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part16', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626', 'dm-uuid-LVM-IwX6ZkXLUCl0YcA4BzLjokZDOeJv2HrfYybBcJHxwkas2gpDO9dJKVm8PTbnaZDM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485', 'dm-uuid-LVM-ed9o0GNO7PQg5svVWsXAoj031P8dkr3TFUwcML7pXDRFpwBAi01fbqUdVpwW93hA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199565 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.199576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef', 'dm-uuid-LVM-lj2Vpg6qcUbLutvAn92lW9fRMiCop0a96nZpb0XQFL6FwSAuZUWe4yMqwLGh1MzJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516', 'dm-uuid-LVM-3ZOGqWctD4o6vg0odPTHCuhke8CUDp1zHHUOc7hjGx9N4xgfXu78V9LfnkitzdkG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B6burn-l0HK-pmKM-ZLX8-pUWb-meyy-cLIfXf', 'scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade', 'scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9WfaV-xFLb-hgB4-M0gh-vWdP-WQMT-J5KorF', 'scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956', 'scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021', 'scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199895 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QF2svk-J06h-RNzj-e4X5-ESi8-uVgE-VzL6nT', 'scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182', 'scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RccNBU-9RDr-ipRC-Qiuy-lZ6U-9BDk-CYheJR', 'scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff', 'scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199916 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.199927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298', 'scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.199948 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.199959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347', 'dm-uuid-LVM-sYScueFnQoEDbsAFWAMa6spsgAc8xeDuz9awT2ffDFq9jBwUbEXZdoMBKRGYttOs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60', 'dm-uuid-LVM-o1NfynOYwuMd33uDEG4GydJoD5Cdujl5dFhpNQiswlXX3LIRayJouByUEan5FOcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.199994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:49:51.200098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.200111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZsI5k-hEKk-uygP-Y5a1-Sval-EoXQ-6fgOoA', 'scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd', 'scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.200123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1gR1qk-cV0S-VAjr-plUs-5yns-7rtf-ve1FK3', 'scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1', 'scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.200144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f', 'scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.200181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:49:51.200197 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.200207 | orchestrator | 2025-10-08 15:49:51.200219 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-10-08 15:49:51.200229 | orchestrator | Wednesday 08 October 2025 15:39:35 +0000 (0:00:01.724) 0:00:32.642 ***** 2025-10-08 15:49:51.200245 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200276 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200293 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200302 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200331 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200341 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part1', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part14', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part15', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part16', 'scsi-SQEMU_QEMU_HARDDISK_6baf8ee8-9e1e-44df-871f-0b875401fb68-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-08 15:49:51.200361 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200375 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200385 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200418 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200427 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200441 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200454 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200464 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddfc7ab9-c1a9-4ba0-9d6e-381eb20f74ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200488 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.200506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200525 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200534 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200548 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.200558 | orchestrator | skipping: [testbed-node-2] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200567 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200580 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200594 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626', 'dm-uuid-LVM-IwX6ZkXLUCl0YcA4BzLjokZDOeJv2HrfYybBcJHxwkas2gpDO9dJKVm8PTbnaZDM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485', 'dm-uuid-LVM-ed9o0GNO7PQg5svVWsXAoj031P8dkr3TFUwcML7pXDRFpwBAi01fbqUdVpwW93hA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200657 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part1', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part14', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part15', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part16', 'scsi-SQEMU_QEMU_HARDDISK_704d8bac-66ec-438a-bbe7-82d4aba4ca14-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-08 15:49:51.200676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-10-08 15:49:51.200744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef', 'dm-uuid-LVM-lj2Vpg6qcUbLutvAn92lW9fRMiCop0a96nZpb0XQFL6FwSAuZUWe4yMqwLGh1MzJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QF2svk-J06h-RNzj-e4X5-ESi8-uVgE-VzL6nT', 'scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182', 'scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RccNBU-9RDr-ipRC-Qiuy-lZ6U-9BDk-CYheJR', 'scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff', 'scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298', 'scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.200991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516', 'dm-uuid-LVM-3ZOGqWctD4o6vg0odPTHCuhke8CUDp1zHHUOc7hjGx9N4xgfXu78V9LfnkitzdkG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201062 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.201082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201092 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.201101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201124 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201144 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201193 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B6burn-l0HK-pmKM-ZLX8-pUWb-meyy-cLIfXf', 'scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade', 'scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9WfaV-xFLb-hgB4-M0gh-vWdP-WQMT-J5KorF', 'scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956', 'scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021', 'scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201249 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347', 'dm-uuid-LVM-sYScueFnQoEDbsAFWAMa6spsgAc8xeDuz9awT2ffDFq9jBwUbEXZdoMBKRGYttOs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60', 'dm-uuid-LVM-o1NfynOYwuMd33uDEG4GydJoD5Cdujl5dFhpNQiswlXX3LIRayJouByUEan5FOcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201295 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.201304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201313 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201331 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201350 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZsI5k-hEKk-uygP-Y5a1-Sval-EoXQ-6fgOoA', 'scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd', 'scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1gR1qk-cV0S-VAjr-plUs-5yns-7rtf-ve1FK3', 'scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1', 'scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f', 'scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:49:51.201461 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.201470 | orchestrator | 2025-10-08 15:49:51.201479 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-10-08 15:49:51.201488 | orchestrator | Wednesday 08 October 2025 15:39:37 +0000 (0:00:01.877) 0:00:34.519 ***** 2025-10-08 15:49:51.201497 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.201506 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.201515 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.201528 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.201537 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.201545 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.201554 | orchestrator | 2025-10-08 15:49:51.201568 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-10-08 15:49:51.201580 | orchestrator | Wednesday 08 October 2025 15:39:38 +0000 (0:00:01.418) 0:00:35.938 ***** 2025-10-08 15:49:51.201589 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.201599 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.201609 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.201618 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.201627 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.201636 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.201646 | orchestrator | 2025-10-08 15:49:51.201656 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-08 15:49:51.201665 | orchestrator | Wednesday 08 October 2025 15:39:39 +0000 (0:00:00.937) 0:00:36.875 ***** 2025-10-08 15:49:51.201675 | orchestrator | skipping: [testbed-node-0] 2025-10-08 
15:49:51.201685 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.201694 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.201704 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.201713 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.201723 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.201732 | orchestrator | 2025-10-08 15:49:51.201742 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-08 15:49:51.201751 | orchestrator | Wednesday 08 October 2025 15:39:41 +0000 (0:00:01.172) 0:00:38.048 ***** 2025-10-08 15:49:51.201761 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.201771 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.201780 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.201789 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.201798 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.201808 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.201817 | orchestrator | 2025-10-08 15:49:51.201827 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-08 15:49:51.201837 | orchestrator | Wednesday 08 October 2025 15:39:41 +0000 (0:00:00.697) 0:00:38.745 ***** 2025-10-08 15:49:51.201846 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.201855 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.201865 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.201874 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.201884 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.201893 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.201902 | orchestrator | 2025-10-08 15:49:51.201912 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-08 15:49:51.201921 | orchestrator | Wednesday 08 
October 2025 15:39:42 +0000 (0:00:01.090) 0:00:39.836 ***** 2025-10-08 15:49:51.201931 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.201940 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.201949 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.201958 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.201966 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.201975 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.201983 | orchestrator | 2025-10-08 15:49:51.201992 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-10-08 15:49:51.202001 | orchestrator | Wednesday 08 October 2025 15:39:43 +0000 (0:00:00.880) 0:00:40.717 ***** 2025-10-08 15:49:51.202009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-08 15:49:51.202051 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-10-08 15:49:51.202061 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-10-08 15:49:51.202070 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-10-08 15:49:51.202078 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-10-08 15:49:51.202087 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-10-08 15:49:51.202095 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-10-08 15:49:51.202104 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-10-08 15:49:51.202118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-10-08 15:49:51.202127 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-10-08 15:49:51.202135 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-10-08 15:49:51.202144 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-10-08 15:49:51.202172 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-10-08 15:49:51.202182 | orchestrator | ok: 
[testbed-node-5] => (item=testbed-node-2) 2025-10-08 15:49:51.202190 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-10-08 15:49:51.202199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-10-08 15:49:51.202208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-10-08 15:49:51.202216 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-10-08 15:49:51.202225 | orchestrator | 2025-10-08 15:49:51.202241 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-10-08 15:49:51.202250 | orchestrator | Wednesday 08 October 2025 15:39:46 +0000 (0:00:03.268) 0:00:43.985 ***** 2025-10-08 15:49:51.202258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-08 15:49:51.202267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-08 15:49:51.202275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-08 15:49:51.202284 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.202292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-10-08 15:49:51.202301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-10-08 15:49:51.202309 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-10-08 15:49:51.202317 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-10-08 15:49:51.202326 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-10-08 15:49:51.202334 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-10-08 15:49:51.202343 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.202352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-08 15:49:51.202373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-08 15:49:51.202382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  
2025-10-08 15:49:51.202390 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.202399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-10-08 15:49:51.202413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-10-08 15:49:51.202422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-10-08 15:49:51.202430 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202439 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.202448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-10-08 15:49:51.202457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-10-08 15:49:51.202465 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-10-08 15:49:51.202474 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.202482 | orchestrator | 2025-10-08 15:49:51.202491 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-10-08 15:49:51.202500 | orchestrator | Wednesday 08 October 2025 15:39:48 +0000 (0:00:01.295) 0:00:45.280 ***** 2025-10-08 15:49:51.202508 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.202517 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.202526 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.202535 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.202543 | orchestrator | 2025-10-08 15:49:51.202553 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-10-08 15:49:51.202561 | orchestrator | Wednesday 08 October 2025 15:39:49 +0000 (0:00:01.413) 0:00:46.694 ***** 2025-10-08 15:49:51.202576 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202585 | orchestrator | skipping: 
[testbed-node-4] 2025-10-08 15:49:51.202594 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.202603 | orchestrator | 2025-10-08 15:49:51.202611 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-10-08 15:49:51.202620 | orchestrator | Wednesday 08 October 2025 15:39:50 +0000 (0:00:00.383) 0:00:47.078 ***** 2025-10-08 15:49:51.202628 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202637 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.202646 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.202654 | orchestrator | 2025-10-08 15:49:51.202663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-10-08 15:49:51.202672 | orchestrator | Wednesday 08 October 2025 15:39:50 +0000 (0:00:00.700) 0:00:47.778 ***** 2025-10-08 15:49:51.202680 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202689 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.202698 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.202706 | orchestrator | 2025-10-08 15:49:51.202715 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-10-08 15:49:51.202724 | orchestrator | Wednesday 08 October 2025 15:39:51 +0000 (0:00:00.791) 0:00:48.570 ***** 2025-10-08 15:49:51.202733 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.202742 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.202750 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.202759 | orchestrator | 2025-10-08 15:49:51.202767 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-10-08 15:49:51.202776 | orchestrator | Wednesday 08 October 2025 15:39:52 +0000 (0:00:00.969) 0:00:49.539 ***** 2025-10-08 15:49:51.202785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-08 15:49:51.202794 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-08 15:49:51.202803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-08 15:49:51.202811 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202820 | orchestrator | 2025-10-08 15:49:51.202829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-10-08 15:49:51.202837 | orchestrator | Wednesday 08 October 2025 15:39:52 +0000 (0:00:00.395) 0:00:49.934 ***** 2025-10-08 15:49:51.202846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-08 15:49:51.202855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-08 15:49:51.202863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-08 15:49:51.202872 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202881 | orchestrator | 2025-10-08 15:49:51.202890 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-10-08 15:49:51.202899 | orchestrator | Wednesday 08 October 2025 15:39:53 +0000 (0:00:00.384) 0:00:50.318 ***** 2025-10-08 15:49:51.202907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-08 15:49:51.202916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-08 15:49:51.202925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-08 15:49:51.202933 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.202942 | orchestrator | 2025-10-08 15:49:51.202951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-10-08 15:49:51.202959 | orchestrator | Wednesday 08 October 2025 15:39:54 +0000 (0:00:00.804) 0:00:51.123 ***** 2025-10-08 15:49:51.202968 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.202977 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.202985 | orchestrator | ok: [testbed-node-5] 
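Nearly all of the skipped loop items earlier in this play (the long runs of `skip_reason: 'Conditional result was False'` with `false_condition: 'osd_auto_discovery | default(False) | bool'`) trace back to a single gate: the `osd_auto_discovery` variable is unset in this testbed, so the `ceph-facts` device loop skips every discovered block device. A minimal Python sketch of how that Jinja2 filter chain evaluates — the `jinja_default`/`jinja_bool` helpers are plain-Python stand-ins for Ansible's `default` and `bool` filters, not the real implementations:

```python
# Sketch (assumption): emulate the filter chain
# "osd_auto_discovery | default(False) | bool" that gates the
# skipped device items in the log above.

def jinja_default(value, fallback):
    # Ansible's default() substitutes only when the variable is
    # undefined; model "undefined" as None here.
    return fallback if value is None else value

def jinja_bool(value):
    # Approximation of the bool filter: booleans pass through,
    # truthy strings like "yes"/"true"/"1"/"on" become True.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1", "on")

def osd_auto_discovery_enabled(hostvar=None):
    return jinja_bool(jinja_default(hostvar, False))

# With the variable unset the condition is False, so every device
# item is skipped -- matching the "Conditional result was False"
# lines in the task output.
print(osd_auto_discovery_enabled())        # False
print(osd_auto_discovery_enabled("true"))  # True
```

Because the condition is evaluated once per loop item, the log repeats the same skip verdict for every entry in `ansible_devices` (loop0..loop7, sda..sdd, dm-0/dm-1, sr0) on each OSD node.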
2025-10-08 15:49:51.202994 | orchestrator |
2025-10-08 15:49:51.203003 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-10-08 15:49:51.203011 | orchestrator | Wednesday 08 October 2025 15:39:54 +0000 (0:00:00.524) 0:00:51.648 *****
2025-10-08 15:49:51.203020 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-08 15:49:51.203034 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-10-08 15:49:51.203042 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-10-08 15:49:51.203051 | orchestrator |
2025-10-08 15:49:51.203060 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-10-08 15:49:51.203069 | orchestrator | Wednesday 08 October 2025 15:39:55 +0000 (0:00:00.935) 0:00:52.584 *****
2025-10-08 15:49:51.203082 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.203091 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:49:51.203100 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:49:51.203113 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-10-08 15:49:51.203121 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-08 15:49:51.203130 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-08 15:49:51.203139 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-08 15:49:51.203148 | orchestrator |
2025-10-08 15:49:51.203174 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-10-08 15:49:51.203184 | orchestrator | Wednesday 08 October 2025 15:39:57 +0000 (0:00:01.788) 0:00:54.372 *****
2025-10-08 15:49:51.203192 | orchestrator | ok: [testbed-node-0] =>
(item=testbed-node-0)
2025-10-08 15:49:51.203201 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:49:51.203210 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:49:51.203219 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-10-08 15:49:51.203227 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-08 15:49:51.203236 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-08 15:49:51.203244 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-08 15:49:51.203253 | orchestrator |
2025-10-08 15:49:51.203261 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-08 15:49:51.203270 | orchestrator | Wednesday 08 October 2025 15:39:59 +0000 (0:00:02.594) 0:00:56.967 *****
2025-10-08 15:49:51.203279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.203289 | orchestrator |
2025-10-08 15:49:51.203297 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-08 15:49:51.203306 | orchestrator | Wednesday 08 October 2025 15:40:01 +0000 (0:00:01.552) 0:00:58.519 *****
2025-10-08 15:49:51.203315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.203324 | orchestrator |
2025-10-08 15:49:51.203332 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-08 15:49:51.203341 | orchestrator | Wednesday 08 October
2025 15:40:02 +0000 (0:00:01.273) 0:00:59.793 *****
2025-10-08 15:49:51.203349 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.203358 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.203366 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.203375 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.203383 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.203392 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.203400 | orchestrator |
2025-10-08 15:49:51.203409 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-08 15:49:51.203417 | orchestrator | Wednesday 08 October 2025 15:40:04 +0000 (0:00:01.777) 0:01:01.570 *****
2025-10-08 15:49:51.203426 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.203439 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.203448 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.203457 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.203465 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.203474 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.203482 | orchestrator |
2025-10-08 15:49:51.203491 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-08 15:49:51.203500 | orchestrator | Wednesday 08 October 2025 15:40:06 +0000 (0:00:02.032) 0:01:03.435 *****
2025-10-08 15:49:51.203508 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.203517 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.203526 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.203534 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.203543 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.203551 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.203560 | orchestrator |
2025-10-08 15:49:51.203568 | orchestrator | TASK [ceph-handler : Check for a rgw container]
********************************
2025-10-08 15:49:51.203577 | orchestrator | Wednesday 08 October 2025 15:40:08 +0000 (0:00:02.032) 0:01:05.467 *****
2025-10-08 15:49:51.203585 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.203594 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.203603 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.203611 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.203620 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.203628 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.203637 | orchestrator |
2025-10-08 15:49:51.203645 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-08 15:49:51.203654 | orchestrator | Wednesday 08 October 2025 15:40:10 +0000 (0:00:01.740) 0:01:07.208 *****
2025-10-08 15:49:51.203662 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.203671 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.203680 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.203688 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.203696 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.203705 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.203714 | orchestrator |
2025-10-08 15:49:51.203722 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-08 15:49:51.203731 | orchestrator | Wednesday 08 October 2025 15:40:11 +0000 (0:00:01.144) 0:01:08.353 *****
2025-10-08 15:49:51.203744 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.203753 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.203762 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.203770 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.203779 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.203788 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.203796 |
orchestrator |
2025-10-08 15:49:51.203809 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-08 15:49:51.203818 | orchestrator | Wednesday 08 October 2025 15:40:12 +0000 (0:00:01.107) 0:01:09.460 *****
2025-10-08 15:49:51.203827 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.203835 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.203844 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.203853 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.203861 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.203869 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.203878 | orchestrator |
2025-10-08 15:49:51.203887 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-08 15:49:51.203895 | orchestrator | Wednesday 08 October 2025 15:40:13 +0000 (0:00:00.625) 0:01:10.086 *****
2025-10-08 15:49:51.203904 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.203913 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.203921 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.203930 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.203938 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.203953 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.203962 | orchestrator |
2025-10-08 15:49:51.203970 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-08 15:49:51.203979 | orchestrator | Wednesday 08 October 2025 15:40:14 +0000 (0:00:01.605) 0:01:11.691 *****
2025-10-08 15:49:51.203988 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.203997 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.204005 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.204013 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.204022 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204030 |
orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204039 | orchestrator |
2025-10-08 15:49:51.204047 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-08 15:49:51.204056 | orchestrator | Wednesday 08 October 2025 15:40:16 +0000 (0:00:01.364) 0:01:13.056 *****
2025-10-08 15:49:51.204065 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204074 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204082 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204091 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.204099 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.204108 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.204116 | orchestrator |
2025-10-08 15:49:51.204125 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-08 15:49:51.204134 | orchestrator | Wednesday 08 October 2025 15:40:17 +0000 (0:00:01.033) 0:01:14.090 *****
2025-10-08 15:49:51.204142 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.204170 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.204180 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.204189 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.204198 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.204206 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.204215 | orchestrator |
2025-10-08 15:49:51.204223 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-10-08 15:49:51.204232 | orchestrator | Wednesday 08 October 2025 15:40:17 +0000 (0:00:00.666) 0:01:14.756 *****
2025-10-08 15:49:51.204241 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204249 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204258 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204266 | orchestrator | ok:
[testbed-node-3]
2025-10-08 15:49:51.204275 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204283 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204292 | orchestrator |
2025-10-08 15:49:51.204300 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-10-08 15:49:51.204309 | orchestrator | Wednesday 08 October 2025 15:40:18 +0000 (0:00:00.888) 0:01:15.645 *****
2025-10-08 15:49:51.204317 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204326 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204334 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204343 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.204351 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204359 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204368 | orchestrator |
2025-10-08 15:49:51.204376 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-10-08 15:49:51.204385 | orchestrator | Wednesday 08 October 2025 15:40:19 +0000 (0:00:01.030) 0:01:16.676 *****
2025-10-08 15:49:51.204393 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204402 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204410 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204419 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.204427 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204436 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204444 | orchestrator |
2025-10-08 15:49:51.204452 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-10-08 15:49:51.204461 | orchestrator | Wednesday 08 October 2025 15:40:21 +0000 (0:00:01.374) 0:01:18.051 *****
2025-10-08 15:49:51.204475 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204484 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204492 |
orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204501 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.204509 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.204517 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.204526 | orchestrator |
2025-10-08 15:49:51.204534 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-10-08 15:49:51.204543 | orchestrator | Wednesday 08 October 2025 15:40:21 +0000 (0:00:00.654) 0:01:18.705 *****
2025-10-08 15:49:51.204552 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.204560 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.204568 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.204577 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.204585 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.204594 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.204602 | orchestrator |
2025-10-08 15:49:51.204611 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-10-08 15:49:51.204624 | orchestrator | Wednesday 08 October 2025 15:40:22 +0000 (0:00:00.955) 0:01:19.660 *****
2025-10-08 15:49:51.204633 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.204642 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.204650 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.204658 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.204667 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.204675 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.204684 | orchestrator |
2025-10-08 15:49:51.204697 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-10-08 15:49:51.204705 | orchestrator | Wednesday 08 October 2025 15:40:23 +0000 (0:00:00.721) 0:01:20.382 *****
2025-10-08 15:49:51.204714 | orchestrator | ok:
[testbed-node-0]
2025-10-08 15:49:51.204723 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.204731 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.204740 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.204748 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204756 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204765 | orchestrator |
2025-10-08 15:49:51.204774 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-10-08 15:49:51.204782 | orchestrator | Wednesday 08 October 2025 15:40:24 +0000 (0:00:00.916) 0:01:21.299 *****
2025-10-08 15:49:51.204791 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.204799 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.204808 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.204816 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.204824 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.204833 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.204841 | orchestrator |
2025-10-08 15:49:51.204850 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-10-08 15:49:51.204858 | orchestrator | Wednesday 08 October 2025 15:40:25 +0000 (0:00:01.677) 0:01:22.976 *****
2025-10-08 15:49:51.204867 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.204876 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.204884 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.204893 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.204901 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.204909 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.204918 | orchestrator |
2025-10-08 15:49:51.204927 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-10-08 15:49:51.204935 | orchestrator | Wednesday 08 October 2025 15:40:27 +0000 (0:00:01.347)
0:01:24.324 *****
2025-10-08 15:49:51.204944 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.204952 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.204961 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.204969 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.204983 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.204991 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.205000 | orchestrator |
2025-10-08 15:49:51.205008 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-10-08 15:49:51.205017 | orchestrator | Wednesday 08 October 2025 15:40:30 +0000 (0:00:02.844) 0:01:27.168 *****
2025-10-08 15:49:51.205026 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.205034 | orchestrator |
2025-10-08 15:49:51.205043 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-10-08 15:49:51.205052 | orchestrator | Wednesday 08 October 2025 15:40:31 +0000 (0:00:01.030) 0:01:28.199 *****
2025-10-08 15:49:51.205060 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.205069 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205077 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205086 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205094 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205102 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205111 | orchestrator |
2025-10-08 15:49:51.205119 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-10-08 15:49:51.205128 | orchestrator | Wednesday 08 October 2025 15:40:31 +0000 (0:00:00.532) 0:01:28.731 *****
2025-10-08 15:49:51.205136 | orchestrator | skipping:
[testbed-node-0]
2025-10-08 15:49:51.205145 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205194 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205204 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205213 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205221 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205230 | orchestrator |
2025-10-08 15:49:51.205238 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-10-08 15:49:51.205247 | orchestrator | Wednesday 08 October 2025 15:40:32 +0000 (0:00:00.778) 0:01:29.510 *****
2025-10-08 15:49:51.205256 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205264 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205273 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205281 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205290 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205298 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-08 15:49:51.205307 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08 15:49:51.205315 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08 15:49:51.205324 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08 15:49:51.205332 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08 15:49:51.205341 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08
15:49:51.205349 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-08 15:49:51.205358 | orchestrator |
2025-10-08 15:49:51.205371 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-10-08 15:49:51.205380 | orchestrator | Wednesday 08 October 2025 15:40:33 +0000 (0:00:01.321) 0:01:30.832 *****
2025-10-08 15:49:51.205389 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.205397 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.205410 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.205419 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.205435 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.205443 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.205451 | orchestrator |
2025-10-08 15:49:51.205459 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-10-08 15:49:51.205467 | orchestrator | Wednesday 08 October 2025 15:40:34 +0000 (0:00:01.122) 0:01:31.954 *****
2025-10-08 15:49:51.205474 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.205482 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205490 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205497 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205505 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205513 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205521 | orchestrator |
2025-10-08 15:49:51.205528 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-10-08 15:49:51.205536 | orchestrator | Wednesday 08 October 2025 15:40:35 +0000 (0:00:00.610) 0:01:32.565 *****
2025-10-08 15:49:51.205544 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.205552 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205559 |
orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205567 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205575 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205582 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205590 | orchestrator |
2025-10-08 15:49:51.205598 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-10-08 15:49:51.205606 | orchestrator | Wednesday 08 October 2025 15:40:36 +0000 (0:00:00.869) 0:01:33.434 *****
2025-10-08 15:49:51.205613 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.205621 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205629 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205637 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205644 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205652 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205660 | orchestrator |
2025-10-08 15:49:51.205667 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-10-08 15:49:51.205675 | orchestrator | Wednesday 08 October 2025 15:40:37 +0000 (0:00:00.611) 0:01:34.046 *****
2025-10-08 15:49:51.205683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.205691 | orchestrator |
2025-10-08 15:49:51.205699 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-10-08 15:49:51.205707 | orchestrator | Wednesday 08 October 2025 15:40:38 +0000 (0:00:01.299) 0:01:35.345 *****
2025-10-08 15:49:51.205715 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.205723 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.205731 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.205738 | orchestrator | ok:
[testbed-node-4]
2025-10-08 15:49:51.205746 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.205754 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.205762 | orchestrator |
2025-10-08 15:49:51.205769 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-10-08 15:49:51.205777 | orchestrator | Wednesday 08 October 2025 15:41:29 +0000 (0:00:51.477) 0:02:26.823 *****
2025-10-08 15:49:51.205785 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205793 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205800 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205808 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.205816 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205824 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205832 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205844 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.205852 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205860 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205868 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205876 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.205883 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205899 | orchestrator | skipping:
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205907 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.205915 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205922 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205930 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205938 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.205946 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2) 
2025-10-08 15:49:51.205953 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2) 
2025-10-08 15:49:51.205961 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4) 
2025-10-08 15:49:51.205973 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.205981 | orchestrator |
2025-10-08 15:49:51.205989 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-10-08 15:49:51.205997 | orchestrator | Wednesday 08 October 2025 15:41:30 +0000 (0:00:00.596) 0:02:27.420 *****
2025-10-08 15:49:51.206004 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.206012 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.206052 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.206060 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.206068 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.206076 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.206084 | orchestrator |
2025-10-08 15:49:51.206092 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-10-08 15:49:51.206099 | orchestrator | Wednesday 08 October 2025 15:41:31 +0000 (0:00:00.678) 0:02:28.098 *****
2025-10-08 15:49:51.206107 |
orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.206115 | orchestrator |
2025-10-08 15:49:51.206123 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-10-08 15:49:51.206131 | orchestrator | Wednesday 08 October 2025 15:41:31 +0000 (0:00:00.130) 0:02:28.228 *****
2025-10-08 15:49:51.206138 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.206146 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.206167 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.206176 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.206183 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.206191 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.206199 | orchestrator |
2025-10-08 15:49:51.206207 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-10-08 15:49:51.206215 | orchestrator | Wednesday 08 October 2025 15:41:31 +0000 (0:00:00.641) 0:02:28.869 *****
2025-10-08 15:49:51.206222 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.206230 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.206238 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.206246 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.206253 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.206261 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.206269 | orchestrator |
2025-10-08 15:49:51.206277 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-10-08 15:49:51.206290 | orchestrator | Wednesday 08 October 2025 15:41:32 +0000 (0:00:00.825) 0:02:29.695 *****
2025-10-08 15:49:51.206298 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.206306 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.206314 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.206321 |
orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.206329 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.206337 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.206344 | orchestrator |
2025-10-08 15:49:51.206352 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-10-08 15:49:51.206360 | orchestrator | Wednesday 08 October 2025 15:41:33 +0000 (0:00:00.666) 0:02:30.361 *****
2025-10-08 15:49:51.206368 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.206376 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.206383 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.206391 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.206399 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.206406 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.206414 | orchestrator |
2025-10-08 15:49:51.206422 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-10-08 15:49:51.206430 | orchestrator | Wednesday 08 October 2025 15:41:35 +0000 (0:00:02.297) 0:02:32.658 *****
2025-10-08 15:49:51.206438 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.206446 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.206454 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.206461 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.206469 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.206477 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.206484 | orchestrator |
2025-10-08 15:49:51.206492 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-10-08 15:49:51.206500 | orchestrator | Wednesday 08 October 2025 15:41:36 +0000 (0:00:00.700) 0:02:33.359 *****
2025-10-08 15:49:51.206508 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.206518 | orchestrator | 2025-10-08 15:49:51.206526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-10-08 15:49:51.206533 | orchestrator | Wednesday 08 October 2025 15:41:37 +0000 (0:00:01.531) 0:02:34.891 ***** 2025-10-08 15:49:51.206541 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206549 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206557 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206565 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206573 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206580 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206588 | orchestrator | 2025-10-08 15:49:51.206596 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-10-08 15:49:51.206604 | orchestrator | Wednesday 08 October 2025 15:41:38 +0000 (0:00:00.765) 0:02:35.656 ***** 2025-10-08 15:49:51.206612 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206619 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206627 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206635 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206642 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206650 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206658 | orchestrator | 2025-10-08 15:49:51.206666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-10-08 15:49:51.206674 | orchestrator | Wednesday 08 October 2025 15:41:39 +0000 (0:00:01.072) 0:02:36.729 ***** 2025-10-08 15:49:51.206681 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206689 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206697 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206705 | 
orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206712 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206725 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206733 | orchestrator | 2025-10-08 15:49:51.206741 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-10-08 15:49:51.206779 | orchestrator | Wednesday 08 October 2025 15:41:40 +0000 (0:00:00.573) 0:02:37.303 ***** 2025-10-08 15:49:51.206789 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206797 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206805 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206813 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206824 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206832 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206840 | orchestrator | 2025-10-08 15:49:51.206848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-10-08 15:49:51.206855 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:00.781) 0:02:38.084 ***** 2025-10-08 15:49:51.206863 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206871 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206879 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206886 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206894 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206902 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206909 | orchestrator | 2025-10-08 15:49:51.206917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-10-08 15:49:51.206925 | orchestrator | Wednesday 08 October 2025 15:41:41 +0000 (0:00:00.664) 0:02:38.749 ***** 2025-10-08 15:49:51.206933 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.206941 | 
orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.206948 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.206956 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.206964 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.206971 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.206979 | orchestrator | 2025-10-08 15:49:51.206987 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-10-08 15:49:51.206995 | orchestrator | Wednesday 08 October 2025 15:41:42 +0000 (0:00:00.817) 0:02:39.567 ***** 2025-10-08 15:49:51.207003 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.207010 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.207018 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.207026 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.207033 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.207041 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.207049 | orchestrator | 2025-10-08 15:49:51.207057 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-10-08 15:49:51.207064 | orchestrator | Wednesday 08 October 2025 15:41:43 +0000 (0:00:00.720) 0:02:40.288 ***** 2025-10-08 15:49:51.207072 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.207080 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.207087 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.207095 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.207103 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.207110 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.207118 | orchestrator | 2025-10-08 15:49:51.207126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-10-08 15:49:51.207134 | orchestrator | Wednesday 08 October 2025 15:41:44 
+0000 (0:00:01.005) 0:02:41.294 ***** 2025-10-08 15:49:51.207142 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.207150 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.207195 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.207203 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.207211 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.207219 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.207226 | orchestrator | 2025-10-08 15:49:51.207234 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-10-08 15:49:51.207248 | orchestrator | Wednesday 08 October 2025 15:41:45 +0000 (0:00:01.134) 0:02:42.428 ***** 2025-10-08 15:49:51.207256 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.207264 | orchestrator | 2025-10-08 15:49:51.207272 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-10-08 15:49:51.207280 | orchestrator | Wednesday 08 October 2025 15:41:46 +0000 (0:00:00.946) 0:02:43.375 ***** 2025-10-08 15:49:51.207288 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-10-08 15:49:51.207296 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-10-08 15:49:51.207304 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-10-08 15:49:51.207311 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-10-08 15:49:51.207319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207335 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-10-08 15:49:51.207343 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-10-08 15:49:51.207350 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207358 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207381 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207389 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-10-08 15:49:51.207397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207405 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207412 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207428 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207436 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207443 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207456 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-10-08 15:49:51.207465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207472 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-10-08 15:49:51.207480 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-10-08 15:49:51.207492 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-10-08 15:49:51.207499 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-10-08 15:49:51.207506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-10-08 15:49:51.207512 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 
2025-10-08 15:49:51.207519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207526 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207539 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207546 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-10-08 15:49:51.207552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207559 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207572 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207584 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-10-08 15:49:51.207591 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207598 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207605 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207611 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207624 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-10-08 15:49:51.207631 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207638 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207644 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207651 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207657 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207664 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-10-08 15:49:51.207671 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207677 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207690 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207697 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207703 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-08 15:49:51.207710 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207717 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207723 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207730 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207737 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207743 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-08 15:49:51.207750 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207756 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207763 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207770 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207776 
| orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207783 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-08 15:49:51.207789 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207796 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207803 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207816 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-08 15:49:51.207823 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207829 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207853 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207863 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207870 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-08 15:49:51.207876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207886 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207893 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-10-08 15:49:51.207900 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-10-08 15:49:51.207906 | orchestrator | changed: [testbed-node-2] => 
(item=/var/run/ceph) 2025-10-08 15:49:51.207913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-08 15:49:51.207920 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-10-08 15:49:51.207927 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-10-08 15:49:51.207933 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-10-08 15:49:51.207940 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-10-08 15:49:51.207946 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-10-08 15:49:51.207953 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-10-08 15:49:51.207960 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-10-08 15:49:51.207966 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-10-08 15:49:51.207973 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-10-08 15:49:51.207979 | orchestrator | 2025-10-08 15:49:51.207986 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-10-08 15:49:51.207993 | orchestrator | Wednesday 08 October 2025 15:41:53 +0000 (0:00:06.715) 0:02:50.090 ***** 2025-10-08 15:49:51.208000 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208006 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208013 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208020 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.208026 | orchestrator | 2025-10-08 15:49:51.208033 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-10-08 15:49:51.208040 | orchestrator | Wednesday 08 October 2025 15:41:53 +0000 (0:00:00.899) 0:02:50.990 ***** 2025-10-08 15:49:51.208047 | orchestrator | changed: 
[testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208054 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208060 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208067 | orchestrator | 2025-10-08 15:49:51.208074 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-10-08 15:49:51.208080 | orchestrator | Wednesday 08 October 2025 15:41:54 +0000 (0:00:00.683) 0:02:51.674 ***** 2025-10-08 15:49:51.208087 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208094 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208100 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208107 | orchestrator | 2025-10-08 15:49:51.208114 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-10-08 15:49:51.208120 | orchestrator | Wednesday 08 October 2025 15:41:56 +0000 (0:00:02.026) 0:02:53.700 ***** 2025-10-08 15:49:51.208134 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208141 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208147 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208166 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.208173 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.208180 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.208186 | 
orchestrator | 2025-10-08 15:49:51.208193 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-10-08 15:49:51.208200 | orchestrator | Wednesday 08 October 2025 15:41:57 +0000 (0:00:00.766) 0:02:54.467 ***** 2025-10-08 15:49:51.208206 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208213 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208219 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208226 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.208233 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.208239 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.208246 | orchestrator | 2025-10-08 15:49:51.208252 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-10-08 15:49:51.208259 | orchestrator | Wednesday 08 October 2025 15:41:58 +0000 (0:00:01.003) 0:02:55.471 ***** 2025-10-08 15:49:51.208265 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208272 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208279 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208285 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208292 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208298 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208305 | orchestrator | 2025-10-08 15:49:51.208312 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-10-08 15:49:51.208318 | orchestrator | Wednesday 08 October 2025 15:41:59 +0000 (0:00:00.945) 0:02:56.416 ***** 2025-10-08 15:49:51.208325 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208331 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208342 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208349 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208356 | orchestrator | 
skipping: [testbed-node-4] 2025-10-08 15:49:51.208362 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208369 | orchestrator | 2025-10-08 15:49:51.208376 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-10-08 15:49:51.208386 | orchestrator | Wednesday 08 October 2025 15:42:00 +0000 (0:00:00.819) 0:02:57.235 ***** 2025-10-08 15:49:51.208392 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208399 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208406 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208412 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208419 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208425 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208432 | orchestrator | 2025-10-08 15:49:51.208439 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-10-08 15:49:51.208445 | orchestrator | Wednesday 08 October 2025 15:42:00 +0000 (0:00:00.648) 0:02:57.884 ***** 2025-10-08 15:49:51.208452 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208459 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208465 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208472 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208478 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208485 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208491 | orchestrator | 2025-10-08 15:49:51.208498 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-10-08 15:49:51.208505 | orchestrator | Wednesday 08 October 2025 15:42:01 +0000 (0:00:00.690) 0:02:58.575 ***** 2025-10-08 15:49:51.208511 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208518 | orchestrator | skipping: [testbed-node-1] 
2025-10-08 15:49:51.208529 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208535 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208542 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208548 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208555 | orchestrator | 2025-10-08 15:49:51.208561 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-10-08 15:49:51.208568 | orchestrator | Wednesday 08 October 2025 15:42:02 +0000 (0:00:00.565) 0:02:59.140 ***** 2025-10-08 15:49:51.208575 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208581 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208588 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208594 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208601 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208607 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208614 | orchestrator | 2025-10-08 15:49:51.208620 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-10-08 15:49:51.208627 | orchestrator | Wednesday 08 October 2025 15:42:02 +0000 (0:00:00.455) 0:02:59.596 ***** 2025-10-08 15:49:51.208634 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208640 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208647 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208654 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.208660 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.208667 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.208674 | orchestrator | 2025-10-08 15:49:51.208680 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-10-08 15:49:51.208687 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 
(0:00:03.487) 0:03:03.083 ***** 2025-10-08 15:49:51.208694 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208700 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208707 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208714 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.208720 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.208727 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.208734 | orchestrator | 2025-10-08 15:49:51.208740 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-10-08 15:49:51.208747 | orchestrator | Wednesday 08 October 2025 15:42:06 +0000 (0:00:00.671) 0:03:03.755 ***** 2025-10-08 15:49:51.208754 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208760 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208767 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208774 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.208780 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.208787 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.208793 | orchestrator | 2025-10-08 15:49:51.208800 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-10-08 15:49:51.208806 | orchestrator | Wednesday 08 October 2025 15:42:07 +0000 (0:00:01.143) 0:03:04.898 ***** 2025-10-08 15:49:51.208813 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208820 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208826 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208833 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.208839 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.208846 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.208853 | orchestrator | 2025-10-08 15:49:51.208859 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-10-08 15:49:51.208866 | orchestrator | Wednesday 08 October 2025 15:42:08 +0000 (0:00:00.805) 0:03:05.704 ***** 2025-10-08 15:49:51.208872 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208879 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208886 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.208892 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208904 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208911 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.208918 | orchestrator | 2025-10-08 15:49:51.208925 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-10-08 15:49:51.208935 | orchestrator | Wednesday 08 October 2025 15:42:09 +0000 (0:00:01.112) 0:03:06.816 ***** 2025-10-08 15:49:51.208942 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.208949 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.208961 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-10-08 15:49:51.208970 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 
2025-10-08 15:49:51.208978 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.208985 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.208991 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-10-08 15:49:51.208998 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-10-08 15:49:51.209005 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209012 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-10-08 15:49:51.209019 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-10-08 15:49:51.209026 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209033 | orchestrator |
2025-10-08 15:49:51.209039 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-10-08 15:49:51.209046 | orchestrator | Wednesday 08 October 2025 15:42:10 +0000 (0:00:01.161) 0:03:07.978 *****
2025-10-08 15:49:51.209053 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209059 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209066 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209072 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209079 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209085 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209092 | orchestrator |
2025-10-08 15:49:51.209099 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-10-08 15:49:51.209105 | orchestrator | Wednesday 08 October 2025 15:42:11 +0000 (0:00:00.899) 0:03:08.877 *****
2025-10-08 15:49:51.209112 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209118 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209131 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209137 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209144 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209162 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209169 | orchestrator |
2025-10-08 15:49:51.209176 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-10-08 15:49:51.209183 | orchestrator | Wednesday 08 October 2025 15:42:12 +0000 (0:00:00.750) 0:03:09.628 *****
2025-10-08 15:49:51.209189 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209196 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209202 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209209 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209215 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209222 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209228 | orchestrator |
2025-10-08 15:49:51.209235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-10-08 15:49:51.209241 | orchestrator | Wednesday 08 October 2025 15:42:13 +0000 (0:00:00.884) 0:03:10.512 *****
2025-10-08 15:49:51.209248 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209254 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209261 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209267 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209274 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209280 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209287 | orchestrator |
2025-10-08 15:49:51.209294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-10-08 15:49:51.209300 | orchestrator | Wednesday 08 October 2025 15:42:14 +0000 (0:00:00.622) 0:03:11.135 *****
2025-10-08 15:49:51.209307 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209314 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209320 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209331 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209338 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209344 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209351 | orchestrator |
2025-10-08 15:49:51.209358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-10-08 15:49:51.209365 | orchestrator | Wednesday 08 October 2025 15:42:14 +0000 (0:00:00.806) 0:03:11.941 *****
2025-10-08 15:49:51.209375 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209381 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209388 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209395 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.209401 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.209408 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.209414 | orchestrator |
2025-10-08 15:49:51.209421 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-10-08 15:49:51.209428 | orchestrator | Wednesday 08 October 2025 15:42:15 +0000 (0:00:00.909) 0:03:12.851 *****
2025-10-08 15:49:51.209434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-08 15:49:51.209441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-08 15:49:51.209448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-08 15:49:51.209454 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209461 | orchestrator |
2025-10-08 15:49:51.209467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-10-08 15:49:51.209474 | orchestrator | Wednesday 08 October 2025 15:42:16 +0000 (0:00:00.563) 0:03:13.414 *****
2025-10-08 15:49:51.209481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-08 15:49:51.209488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-08 15:49:51.209494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-08 15:49:51.209501 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209513 | orchestrator |
2025-10-08 15:49:51.209519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-10-08 15:49:51.209526 | orchestrator | Wednesday 08 October 2025 15:42:17 +0000 (0:00:00.749) 0:03:14.163 *****
2025-10-08 15:49:51.209533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-08 15:49:51.209539 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-08 15:49:51.209546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-08 15:49:51.209553 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209559 | orchestrator |
2025-10-08 15:49:51.209566 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-10-08 15:49:51.209573 | orchestrator | Wednesday 08 October 2025 15:42:17 +0000 (0:00:00.372) 0:03:14.536 *****
2025-10-08 15:49:51.209579 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209586 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209592 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209599 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.209606 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.209612 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.209619 | orchestrator |
2025-10-08 15:49:51.209626 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-10-08 15:49:51.209632 | orchestrator | Wednesday 08 October 2025 15:42:18 +0000 (0:00:00.582) 0:03:15.119 *****
2025-10-08 15:49:51.209639 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-10-08 15:49:51.209645 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.209652 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-10-08 15:49:51.209659 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-10-08 15:49:51.209665 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.209672 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.209678 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-08 15:49:51.209685 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-10-08 15:49:51.209691 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-10-08 15:49:51.209698 | orchestrator |
2025-10-08 15:49:51.209705 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-10-08 15:49:51.209711 | orchestrator | Wednesday 08 October 2025 15:42:20 +0000 (0:00:02.659) 0:03:17.778 *****
2025-10-08 15:49:51.209718 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.209725 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.209731 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.209738 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.209744 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.209751 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.209757 | orchestrator |
2025-10-08 15:49:51.209764 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-08 15:49:51.209770 | orchestrator | Wednesday 08 October 2025 15:42:24 +0000 (0:00:04.006) 0:03:21.784 *****
2025-10-08 15:49:51.209777 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.209784 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.209790 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.209797 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.209803 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.209810 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.209816 | orchestrator |
2025-10-08 15:49:51.209823 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-10-08 15:49:51.209830 | orchestrator | Wednesday 08 October 2025 15:42:26 +0000 (0:00:01.723) 0:03:23.508 *****
2025-10-08 15:49:51.209836 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.209843 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.209849 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.209856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:49:51.209868 | orchestrator |
2025-10-08 15:49:51.209875 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-10-08 15:49:51.209882 | orchestrator | Wednesday 08 October 2025 15:42:28 +0000 (0:00:01.519) 0:03:25.027 *****
2025-10-08 15:49:51.209888 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.209895 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.209902 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.209908 | orchestrator |
2025-10-08 15:49:51.209915 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-10-08 15:49:51.209925 | orchestrator | Wednesday 08 October 2025 15:42:28 +0000 (0:00:00.515) 0:03:25.543 *****
2025-10-08 15:49:51.209932 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.209939 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.209945 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.209952 | orchestrator |
2025-10-08 15:49:51.209962 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-10-08 15:49:51.209968 | orchestrator | Wednesday 08 October 2025 15:42:29 +0000 (0:00:01.290) 0:03:26.834 *****
2025-10-08 15:49:51.209975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.209982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 15:49:51.209989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 15:49:51.209995 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.210002 | orchestrator |
2025-10-08 15:49:51.210008 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-10-08 15:49:51.210084 | orchestrator | Wednesday 08 October 2025 15:42:30 +0000 (0:00:00.767) 0:03:27.601 *****
2025-10-08 15:49:51.210094 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.210101 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.210107 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.210114 | orchestrator |
2025-10-08 15:49:51.210121 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-10-08 15:49:51.210127 | orchestrator | Wednesday 08 October 2025 15:42:31 +0000 (0:00:00.509) 0:03:28.111 *****
2025-10-08 15:49:51.210134 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.210141 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.210147 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.210166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.210173 | orchestrator |
2025-10-08 15:49:51.210180 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-10-08 15:49:51.210187 | orchestrator | Wednesday 08 October 2025 15:42:32 +0000 (0:00:01.146) 0:03:29.257 *****
2025-10-08 15:49:51.210193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.210200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.210207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.210214 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210220 | orchestrator |
2025-10-08 15:49:51.210227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-10-08 15:49:51.210234 | orchestrator | Wednesday 08 October 2025 15:42:32 +0000 (0:00:00.428) 0:03:29.686 *****
2025-10-08 15:49:51.210240 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210247 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.210254 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.210260 | orchestrator |
2025-10-08 15:49:51.210267 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-10-08 15:49:51.210274 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:00.452) 0:03:30.138 *****
2025-10-08 15:49:51.210280 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210287 | orchestrator |
2025-10-08 15:49:51.210294 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-10-08 15:49:51.210300 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:00.357) 0:03:30.496 *****
2025-10-08 15:49:51.210312 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210319 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.210326 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.210333 | orchestrator |
2025-10-08 15:49:51.210339 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-10-08 15:49:51.210346 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:00.298) 0:03:30.794 *****
2025-10-08 15:49:51.210353 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210359 | orchestrator |
2025-10-08 15:49:51.210366 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-10-08 15:49:51.210373 | orchestrator | Wednesday 08 October 2025 15:42:33 +0000 (0:00:00.207) 0:03:31.001 *****
2025-10-08 15:49:51.210379 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210386 | orchestrator |
2025-10-08 15:49:51.210393 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-10-08 15:49:51.210399 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.249) 0:03:31.251 *****
2025-10-08 15:49:51.210406 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210413 | orchestrator |
2025-10-08 15:49:51.210419 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-10-08 15:49:51.210426 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.120) 0:03:31.371 *****
2025-10-08 15:49:51.210432 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210439 | orchestrator |
2025-10-08 15:49:51.210446 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-10-08 15:49:51.210452 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.198) 0:03:31.570 *****
2025-10-08 15:49:51.210459 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210466 | orchestrator |
2025-10-08 15:49:51.210472 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-10-08 15:49:51.210479 | orchestrator | Wednesday 08 October 2025 15:42:34 +0000 (0:00:00.187) 0:03:31.757 *****
2025-10-08 15:49:51.210486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.210493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.210499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.210506 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210513 | orchestrator |
2025-10-08 15:49:51.210519 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-10-08 15:49:51.210526 | orchestrator | Wednesday 08 October 2025 15:42:35 +0000 (0:00:00.566) 0:03:32.324 *****
2025-10-08 15:49:51.210532 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210539 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.210546 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.210552 | orchestrator |
2025-10-08 15:49:51.210582 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-10-08 15:49:51.210590 | orchestrator | Wednesday 08 October 2025 15:42:35 +0000 (0:00:00.433) 0:03:32.757 *****
2025-10-08 15:49:51.210597 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210604 | orchestrator |
2025-10-08 15:49:51.210610 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-10-08 15:49:51.210621 | orchestrator | Wednesday 08 October 2025 15:42:35 +0000 (0:00:00.168) 0:03:32.926 *****
2025-10-08 15:49:51.210628 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210634 | orchestrator |
2025-10-08 15:49:51.210641 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-10-08 15:49:51.210647 | orchestrator | Wednesday 08 October 2025 15:42:36 +0000 (0:00:00.212) 0:03:33.139 *****
2025-10-08 15:49:51.210654 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.210661 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.210667 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.210674 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.210685 | orchestrator |
2025-10-08 15:49:51.210692 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-10-08 15:49:51.210699 | orchestrator | Wednesday 08 October 2025 15:42:36 +0000 (0:00:00.826) 0:03:33.965 *****
2025-10-08 15:49:51.210705 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.210712 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.210718 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.210725 | orchestrator |
2025-10-08 15:49:51.210731 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-10-08 15:49:51.210738 | orchestrator | Wednesday 08 October 2025 15:42:37 +0000 (0:00:00.544) 0:03:34.510 *****
2025-10-08 15:49:51.210744 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.210751 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.210758 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.210764 | orchestrator |
2025-10-08 15:49:51.210771 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-10-08 15:49:51.210777 | orchestrator | Wednesday 08 October 2025 15:42:38 +0000 (0:00:01.213) 0:03:35.723 *****
2025-10-08 15:49:51.210784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.210790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.210797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.210804 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.210810 | orchestrator |
2025-10-08 15:49:51.210817 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-10-08 15:49:51.210824 | orchestrator | Wednesday 08 October 2025 15:42:39 +0000 (0:00:00.693) 0:03:36.417 *****
2025-10-08 15:49:51.210830 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.210837 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.210843 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.210850 | orchestrator |
2025-10-08 15:49:51.210856 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-10-08 15:49:51.210863 | orchestrator | Wednesday 08 October 2025 15:42:39 +0000 (0:00:00.434) 0:03:36.851 *****
2025-10-08 15:49:51.210870 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.210876 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.210883 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.210889 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.210896 | orchestrator |
2025-10-08 15:49:51.210903 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-10-08 15:49:51.210909 | orchestrator | Wednesday 08 October 2025 15:42:40 +0000 (0:00:01.128) 0:03:37.980 *****
2025-10-08 15:49:51.210916 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.210922 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.210929 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.210935 | orchestrator |
2025-10-08 15:49:51.210942 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-10-08 15:49:51.210949 | orchestrator | Wednesday 08 October 2025 15:42:41 +0000 (0:00:00.440) 0:03:38.420 *****
2025-10-08 15:49:51.210955 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.210962 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.210968 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.210975 | orchestrator |
2025-10-08 15:49:51.210982 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-10-08 15:49:51.210988 | orchestrator | Wednesday 08 October 2025 15:42:42 +0000 (0:00:01.496) 0:03:39.917 *****
2025-10-08 15:49:51.210995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.211001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.211008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.211015 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.211021 | orchestrator |
2025-10-08 15:49:51.211028 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-10-08 15:49:51.211038 | orchestrator | Wednesday 08 October 2025 15:42:43 +0000 (0:00:00.523) 0:03:40.441 *****
2025-10-08 15:49:51.211045 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.211052 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.211058 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.211065 | orchestrator |
2025-10-08 15:49:51.211071 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-10-08 15:49:51.211078 | orchestrator | Wednesday 08 October 2025 15:42:43 +0000 (0:00:00.399) 0:03:40.840 *****
2025-10-08 15:49:51.211084 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211091 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211098 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211104 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.211110 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.211117 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.211123 | orchestrator |
2025-10-08 15:49:51.211130 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-10-08 15:49:51.211137 | orchestrator | Wednesday 08 October 2025 15:42:44 +0000 (0:00:00.644) 0:03:41.484 *****
2025-10-08 15:49:51.211183 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.211195 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.211205 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.211215 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:49:51.211224 | orchestrator |
2025-10-08 15:49:51.211238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-10-08 15:49:51.211250 | orchestrator | Wednesday 08 October 2025 15:42:45 +0000 (0:00:01.046) 0:03:42.531 *****
2025-10-08 15:49:51.211261 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211272 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211280 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211287 | orchestrator |
2025-10-08 15:49:51.211294 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-10-08 15:49:51.211300 | orchestrator | Wednesday 08 October 2025 15:42:45 +0000 (0:00:00.312) 0:03:42.844 *****
2025-10-08 15:49:51.211307 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.211313 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:49:51.211320 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:49:51.211326 | orchestrator |
2025-10-08 15:49:51.211333 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-10-08 15:49:51.211340 | orchestrator | Wednesday 08 October 2025 15:42:47 +0000 (0:00:01.356) 0:03:44.200 *****
2025-10-08 15:49:51.211346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:49:51.211353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 15:49:51.211359 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 15:49:51.211366 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211372 | orchestrator |
2025-10-08 15:49:51.211379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-10-08 15:49:51.211385 | orchestrator | Wednesday 08 October 2025 15:42:47 +0000 (0:00:00.565) 0:03:44.766 *****
2025-10-08 15:49:51.211392 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211398 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211405 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211411 | orchestrator |
2025-10-08 15:49:51.211418 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-10-08 15:49:51.211425 | orchestrator |
2025-10-08 15:49:51.211431 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-08 15:49:51.211438 | orchestrator | Wednesday 08 October 2025 15:42:48 +0000 (0:00:00.546) 0:03:45.312 *****
2025-10-08 15:49:51.211445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:49:51.211452 | orchestrator |
2025-10-08 15:49:51.211458 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-08 15:49:51.211472 | orchestrator | Wednesday 08 October 2025 15:42:49 +0000 (0:00:00.755) 0:03:46.067 *****
2025-10-08 15:49:51.211478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:49:51.211485 | orchestrator |
2025-10-08 15:49:51.211492 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-08 15:49:51.211498 | orchestrator | Wednesday 08 October 2025 15:42:49 +0000 (0:00:00.504) 0:03:46.572 *****
2025-10-08 15:49:51.211505 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211511 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211518 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211524 | orchestrator |
2025-10-08 15:49:51.211531 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-08 15:49:51.211537 | orchestrator | Wednesday 08 October 2025 15:42:50 +0000 (0:00:00.772) 0:03:47.344 *****
2025-10-08 15:49:51.211544 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211550 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211557 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211563 | orchestrator |
2025-10-08 15:49:51.211570 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-08 15:49:51.211576 | orchestrator | Wednesday 08 October 2025 15:42:50 +0000 (0:00:00.570) 0:03:47.915 *****
2025-10-08 15:49:51.211583 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211590 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211596 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211603 | orchestrator |
2025-10-08 15:49:51.211609 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-10-08 15:49:51.211616 | orchestrator | Wednesday 08 October 2025 15:42:51 +0000 (0:00:00.390) 0:03:48.306 *****
2025-10-08 15:49:51.211622 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211629 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211635 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211642 | orchestrator |
2025-10-08 15:49:51.211648 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-08 15:49:51.211655 | orchestrator | Wednesday 08 October 2025 15:42:51 +0000 (0:00:00.301) 0:03:48.608 *****
2025-10-08 15:49:51.211661 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211668 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211675 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211681 | orchestrator |
2025-10-08 15:49:51.211688 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-08 15:49:51.211694 | orchestrator | Wednesday 08 October 2025 15:42:52 +0000 (0:00:00.822) 0:03:49.430 *****
2025-10-08 15:49:51.211701 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211707 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211714 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211721 | orchestrator |
2025-10-08 15:49:51.211727 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-08 15:49:51.211734 | orchestrator | Wednesday 08 October 2025 15:42:53 +0000 (0:00:00.624) 0:03:50.054 *****
2025-10-08 15:49:51.211740 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211747 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211754 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211760 | orchestrator |
2025-10-08 15:49:51.211767 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-08 15:49:51.211796 | orchestrator | Wednesday 08 October 2025 15:42:53 +0000 (0:00:00.347) 0:03:50.402 *****
2025-10-08 15:49:51.211804 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211810 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211817 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211824 | orchestrator |
2025-10-08 15:49:51.211831 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-08 15:49:51.211841 | orchestrator | Wednesday 08 October 2025 15:42:54 +0000 (0:00:00.987) 0:03:51.390 *****
2025-10-08 15:49:51.211853 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211860 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211867 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211873 | orchestrator |
2025-10-08 15:49:51.211881 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-08 15:49:51.211887 | orchestrator | Wednesday 08 October 2025 15:42:55 +0000 (0:00:00.766) 0:03:52.156 *****
2025-10-08 15:49:51.211894 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211901 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211908 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211914 | orchestrator |
2025-10-08 15:49:51.211921 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-08 15:49:51.211928 | orchestrator | Wednesday 08 October 2025 15:42:55 +0000 (0:00:00.644) 0:03:52.800 *****
2025-10-08 15:49:51.211935 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.211941 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.211948 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.211954 | orchestrator |
2025-10-08 15:49:51.211961 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-10-08 15:49:51.211968 | orchestrator | Wednesday 08 October 2025 15:42:56 +0000 (0:00:00.363) 0:03:53.164 *****
2025-10-08 15:49:51.211974 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.211981 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.211988 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.211995 | orchestrator |
2025-10-08 15:49:51.212001 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-10-08 15:49:51.212008 | orchestrator | Wednesday 08 October 2025 15:42:56 +0000 (0:00:00.364) 0:03:53.528 *****
2025-10-08 15:49:51.212014 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.212021 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.212028 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.212034 | orchestrator |
2025-10-08 15:49:51.212041 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-10-08 15:49:51.212048 | orchestrator | Wednesday 08 October 2025 15:42:56 +0000 (0:00:00.314) 0:03:53.843 *****
2025-10-08 15:49:51.212054 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.212061 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.212068 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.212074 | orchestrator |
2025-10-08 15:49:51.212081 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-10-08 15:49:51.212088 | orchestrator | Wednesday 08 October 2025 15:42:57 +0000 (0:00:00.297) 0:03:54.140 *****
2025-10-08 15:49:51.212094 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.212101 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.212108 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.212114 | orchestrator |
2025-10-08 15:49:51.212121 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-10-08 15:49:51.212128 | orchestrator | Wednesday 08 October 2025 15:42:57 +0000 (0:00:00.629) 0:03:54.770 *****
2025-10-08 15:49:51.212134 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.212141 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.212147 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.212189 | orchestrator |
2025-10-08 15:49:51.212196 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-10-08 15:49:51.212203 | orchestrator | Wednesday 08 October 2025 15:42:58 +0000 (0:00:00.316) 0:03:55.086 *****
2025-10-08 15:49:51.212210 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.212217 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.212223 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.212230 | orchestrator |
2025-10-08 15:49:51.212236 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-10-08 15:49:51.212243 | orchestrator | Wednesday 08 October 2025 15:42:58 +0000 (0:00:00.376) 0:03:55.463 *****
2025-10-08 15:49:51.212250 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.212261 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.212268 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.212274 | orchestrator |
2025-10-08 15:49:51.212281 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-10-08 15:49:51.212288 | orchestrator | Wednesday 08 October 2025 15:42:58 +0000 (0:00:00.364) 0:03:55.827 *****
2025-10-08 15:49:51.212294 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.212301 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.212308 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.212314 | orchestrator |
2025-10-08 15:49:51.212321 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-10-08 15:49:51.212327 | orchestrator | Wednesday 08 October 2025 15:42:59 +0000 (0:00:00.789) 0:03:56.617 *****
2025-10-08 15:49:51.212334 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.212341 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.212347 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.212354 | orchestrator |
2025-10-08 15:49:51.212361 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-10-08 15:49:51.212367 | orchestrator | Wednesday 08 October 2025 15:42:59 +0000 (0:00:00.340) 0:03:56.957 *****
2025-10-08 15:49:51.212374 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:49:51.212381 | orchestrator |
2025-10-08 15:49:51.212387 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-10-08 15:49:51.212394 | orchestrator | Wednesday 08 October 2025 15:43:00 +0000 (0:00:00.842) 0:03:57.799 *****
2025-10-08 15:49:51.212400 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.212407 | orchestrator |
2025-10-08 15:49:51.212414 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-10-08 15:49:51.212420 | orchestrator | Wednesday 08 October 2025 15:43:00 +0000 (0:00:00.166) 0:03:57.966 *****
2025-10-08 15:49:51.212427 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-10-08 15:49:51.212434 | orchestrator |
2025-10-08 15:49:51.212461 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-10-08 15:49:51.212469 | orchestrator | Wednesday 08 October 2025 15:43:02 +0000 (0:00:01.116) 0:03:59.083 *****
2025-10-08 15:49:51.212476 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.212483 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.212489 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.212496 | orchestrator |
2025-10-08 15:49:51.212509 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-10-08 15:49:51.212516 | orchestrator |
Wednesday 08 October 2025 15:43:02 +0000 (0:00:00.378) 0:03:59.461 ***** 2025-10-08 15:49:51.212522 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.212529 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.212536 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.212542 | orchestrator | 2025-10-08 15:49:51.212549 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-10-08 15:49:51.212555 | orchestrator | Wednesday 08 October 2025 15:43:02 +0000 (0:00:00.351) 0:03:59.812 ***** 2025-10-08 15:49:51.212562 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212569 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.212575 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.212582 | orchestrator | 2025-10-08 15:49:51.212588 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-10-08 15:49:51.212595 | orchestrator | Wednesday 08 October 2025 15:43:04 +0000 (0:00:01.440) 0:04:01.253 ***** 2025-10-08 15:49:51.212601 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212608 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.212614 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.212621 | orchestrator | 2025-10-08 15:49:51.212628 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-10-08 15:49:51.212634 | orchestrator | Wednesday 08 October 2025 15:43:05 +0000 (0:00:00.769) 0:04:02.022 ***** 2025-10-08 15:49:51.212641 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212651 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.212657 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.212664 | orchestrator | 2025-10-08 15:49:51.212671 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-10-08 15:49:51.212677 | orchestrator | Wednesday 08 October 2025 
15:43:05 +0000 (0:00:00.757) 0:04:02.780 ***** 2025-10-08 15:49:51.212684 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.212691 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.212697 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.212704 | orchestrator | 2025-10-08 15:49:51.212710 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-10-08 15:49:51.212717 | orchestrator | Wednesday 08 October 2025 15:43:06 +0000 (0:00:00.667) 0:04:03.447 ***** 2025-10-08 15:49:51.212723 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212729 | orchestrator | 2025-10-08 15:49:51.212735 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-10-08 15:49:51.212741 | orchestrator | Wednesday 08 October 2025 15:43:07 +0000 (0:00:01.241) 0:04:04.689 ***** 2025-10-08 15:49:51.212747 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.212753 | orchestrator | 2025-10-08 15:49:51.212759 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-10-08 15:49:51.212765 | orchestrator | Wednesday 08 October 2025 15:43:08 +0000 (0:00:00.731) 0:04:05.420 ***** 2025-10-08 15:49:51.212771 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 15:49:51.212777 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.212783 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.212790 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:49:51.212796 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-10-08 15:49:51.212802 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:49:51.212808 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 
15:49:51.212814 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-10-08 15:49:51.212820 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 15:49:51.212826 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-10-08 15:49:51.212832 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-10-08 15:49:51.212838 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-10-08 15:49:51.212844 | orchestrator | 2025-10-08 15:49:51.212851 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-10-08 15:49:51.212857 | orchestrator | Wednesday 08 October 2025 15:43:11 +0000 (0:00:03.485) 0:04:08.906 ***** 2025-10-08 15:49:51.212863 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212869 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.212875 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.212881 | orchestrator | 2025-10-08 15:49:51.212887 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-10-08 15:49:51.212893 | orchestrator | Wednesday 08 October 2025 15:43:13 +0000 (0:00:01.554) 0:04:10.461 ***** 2025-10-08 15:49:51.212899 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.212906 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.212912 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.212918 | orchestrator | 2025-10-08 15:49:51.212924 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-10-08 15:49:51.212930 | orchestrator | Wednesday 08 October 2025 15:43:13 +0000 (0:00:00.351) 0:04:10.812 ***** 2025-10-08 15:49:51.212936 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.212942 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.212948 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.212954 | orchestrator | 2025-10-08 15:49:51.212960 | orchestrator | TASK [ceph-mon : Generate 
initial monmap] ************************************** 2025-10-08 15:49:51.212966 | orchestrator | Wednesday 08 October 2025 15:43:14 +0000 (0:00:00.304) 0:04:11.117 ***** 2025-10-08 15:49:51.212977 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.212983 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.212989 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.212995 | orchestrator | 2025-10-08 15:49:51.213001 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-10-08 15:49:51.213024 | orchestrator | Wednesday 08 October 2025 15:43:16 +0000 (0:00:02.428) 0:04:13.545 ***** 2025-10-08 15:49:51.213031 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.213038 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.213044 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.213050 | orchestrator | 2025-10-08 15:49:51.213056 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-10-08 15:49:51.213066 | orchestrator | Wednesday 08 October 2025 15:43:17 +0000 (0:00:01.362) 0:04:14.907 ***** 2025-10-08 15:49:51.213072 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213078 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213084 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213090 | orchestrator | 2025-10-08 15:49:51.213096 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-10-08 15:49:51.213102 | orchestrator | Wednesday 08 October 2025 15:43:18 +0000 (0:00:00.310) 0:04:15.218 ***** 2025-10-08 15:49:51.213109 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213115 | orchestrator | 2025-10-08 15:49:51.213121 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-10-08 
15:49:51.213127 | orchestrator | Wednesday 08 October 2025 15:43:18 +0000 (0:00:00.540) 0:04:15.758 ***** 2025-10-08 15:49:51.213133 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213139 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213145 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213162 | orchestrator | 2025-10-08 15:49:51.213169 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-10-08 15:49:51.213175 | orchestrator | Wednesday 08 October 2025 15:43:19 +0000 (0:00:00.557) 0:04:16.315 ***** 2025-10-08 15:49:51.213181 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213188 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213194 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213200 | orchestrator | 2025-10-08 15:49:51.213206 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-10-08 15:49:51.213212 | orchestrator | Wednesday 08 October 2025 15:43:19 +0000 (0:00:00.307) 0:04:16.623 ***** 2025-10-08 15:49:51.213218 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213224 | orchestrator | 2025-10-08 15:49:51.213230 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-10-08 15:49:51.213237 | orchestrator | Wednesday 08 October 2025 15:43:20 +0000 (0:00:00.509) 0:04:17.132 ***** 2025-10-08 15:49:51.213243 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.213249 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.213255 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.213261 | orchestrator | 2025-10-08 15:49:51.213267 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-10-08 15:49:51.213273 | orchestrator | Wednesday 08 October 2025 15:43:22 
+0000 (0:00:01.907) 0:04:19.039 ***** 2025-10-08 15:49:51.213279 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.213285 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.213292 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.213298 | orchestrator | 2025-10-08 15:49:51.213304 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-10-08 15:49:51.213310 | orchestrator | Wednesday 08 October 2025 15:43:23 +0000 (0:00:01.403) 0:04:20.443 ***** 2025-10-08 15:49:51.213316 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.213326 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.213332 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.213339 | orchestrator | 2025-10-08 15:49:51.213345 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-10-08 15:49:51.213351 | orchestrator | Wednesday 08 October 2025 15:43:25 +0000 (0:00:01.944) 0:04:22.387 ***** 2025-10-08 15:49:51.213357 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.213363 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.213369 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.213375 | orchestrator | 2025-10-08 15:49:51.213381 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-10-08 15:49:51.213387 | orchestrator | Wednesday 08 October 2025 15:43:27 +0000 (0:00:02.255) 0:04:24.642 ***** 2025-10-08 15:49:51.213393 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213400 | orchestrator | 2025-10-08 15:49:51.213406 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-10-08 15:49:51.213412 | orchestrator | Wednesday 08 October 2025 15:43:28 +0000 (0:00:00.753) 0:04:25.396 ***** 2025-10-08 15:49:51.213418 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.213424 | orchestrator | 2025-10-08 15:49:51.213430 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-10-08 15:49:51.213436 | orchestrator | Wednesday 08 October 2025 15:43:29 +0000 (0:00:01.087) 0:04:26.484 ***** 2025-10-08 15:49:51.213442 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.213448 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.213455 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.213461 | orchestrator | 2025-10-08 15:49:51.213467 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-10-08 15:49:51.213473 | orchestrator | Wednesday 08 October 2025 15:43:38 +0000 (0:00:09.171) 0:04:35.655 ***** 2025-10-08 15:49:51.213479 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213485 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213491 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213497 | orchestrator | 2025-10-08 15:49:51.213503 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-10-08 15:49:51.213510 | orchestrator | Wednesday 08 October 2025 15:43:39 +0000 (0:00:00.361) 0:04:36.016 ***** 2025-10-08 15:49:51.213534 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-10-08 15:49:51.213546 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-10-08 15:49:51.213554 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-10-08 15:49:51.213561 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-10-08 15:49:51.213568 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-10-08 15:49:51.213579 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__e1fae0f0bf89f5b4755867603d5fb839f0e07fbc'}])  2025-10-08 15:49:51.213587 | orchestrator | 2025-10-08 15:49:51.213594 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-08 15:49:51.213600 | orchestrator | Wednesday 08 October 2025 15:43:54 +0000 (0:00:15.328) 0:04:51.344 ***** 2025-10-08 15:49:51.213606 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213612 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213618 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213625 | orchestrator | 2025-10-08 15:49:51.213631 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-10-08 15:49:51.213637 | orchestrator | Wednesday 08 October 2025 15:43:54 +0000 (0:00:00.476) 0:04:51.821 ***** 2025-10-08 15:49:51.213643 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213649 | orchestrator | 2025-10-08 15:49:51.213655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-10-08 15:49:51.213661 | orchestrator | Wednesday 08 October 2025 15:43:55 +0000 (0:00:00.846) 0:04:52.668 ***** 2025-10-08 15:49:51.213667 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.213673 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.213680 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.213686 | orchestrator | 2025-10-08 15:49:51.213692 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-10-08 15:49:51.213698 | orchestrator | Wednesday 08 October 2025 15:43:56 +0000 (0:00:00.348) 0:04:53.016 ***** 2025-10-08 15:49:51.213704 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213710 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213716 | orchestrator | skipping: [testbed-node-2] 2025-10-08 
15:49:51.213722 | orchestrator | 2025-10-08 15:49:51.213728 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-10-08 15:49:51.213734 | orchestrator | Wednesday 08 October 2025 15:43:56 +0000 (0:00:00.406) 0:04:53.422 ***** 2025-10-08 15:49:51.213740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-08 15:49:51.213747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-08 15:49:51.213753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-08 15:49:51.213759 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213765 | orchestrator | 2025-10-08 15:49:51.213771 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-10-08 15:49:51.213777 | orchestrator | Wednesday 08 October 2025 15:43:57 +0000 (0:00:00.887) 0:04:54.310 ***** 2025-10-08 15:49:51.213783 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.213790 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.213796 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.213802 | orchestrator | 2025-10-08 15:49:51.213808 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-10-08 15:49:51.213814 | orchestrator | 2025-10-08 15:49:51.213820 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-08 15:49:51.213826 | orchestrator | Wednesday 08 October 2025 15:43:58 +0000 (0:00:00.854) 0:04:55.165 ***** 2025-10-08 15:49:51.213850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213862 | orchestrator | 2025-10-08 15:49:51.213868 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-08 15:49:51.213878 | orchestrator | Wednesday 08 October 2025 15:43:58 +0000 
(0:00:00.518) 0:04:55.683 ***** 2025-10-08 15:49:51.213884 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.213890 | orchestrator | 2025-10-08 15:49:51.213896 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-08 15:49:51.213902 | orchestrator | Wednesday 08 October 2025 15:43:59 +0000 (0:00:00.776) 0:04:56.460 ***** 2025-10-08 15:49:51.213909 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.213915 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.213921 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.213927 | orchestrator | 2025-10-08 15:49:51.213933 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-08 15:49:51.213939 | orchestrator | Wednesday 08 October 2025 15:44:00 +0000 (0:00:00.701) 0:04:57.162 ***** 2025-10-08 15:49:51.213945 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213952 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213958 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.213964 | orchestrator | 2025-10-08 15:49:51.213970 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-08 15:49:51.213976 | orchestrator | Wednesday 08 October 2025 15:44:00 +0000 (0:00:00.343) 0:04:57.505 ***** 2025-10-08 15:49:51.213982 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.213988 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.213994 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214001 | orchestrator | 2025-10-08 15:49:51.214007 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-08 15:49:51.214029 | orchestrator | Wednesday 08 October 2025 15:44:00 +0000 (0:00:00.316) 0:04:57.822 ***** 2025-10-08 15:49:51.214036 | 
orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214042 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214049 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214056 | orchestrator | 2025-10-08 15:49:51.214062 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-08 15:49:51.214069 | orchestrator | Wednesday 08 October 2025 15:44:01 +0000 (0:00:00.627) 0:04:58.449 ***** 2025-10-08 15:49:51.214075 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214081 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214087 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214093 | orchestrator | 2025-10-08 15:49:51.214100 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-08 15:49:51.214106 | orchestrator | Wednesday 08 October 2025 15:44:02 +0000 (0:00:00.709) 0:04:59.159 ***** 2025-10-08 15:49:51.214112 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214118 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214124 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214130 | orchestrator | 2025-10-08 15:49:51.214136 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-08 15:49:51.214143 | orchestrator | Wednesday 08 October 2025 15:44:02 +0000 (0:00:00.345) 0:04:59.504 ***** 2025-10-08 15:49:51.214149 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214167 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214174 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214180 | orchestrator | 2025-10-08 15:49:51.214186 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-08 15:49:51.214192 | orchestrator | Wednesday 08 October 2025 15:44:02 +0000 (0:00:00.338) 0:04:59.842 ***** 2025-10-08 15:49:51.214198 | orchestrator | ok: 
[testbed-node-1] 2025-10-08 15:49:51.214204 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214210 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214216 | orchestrator | 2025-10-08 15:49:51.214223 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-08 15:49:51.214234 | orchestrator | Wednesday 08 October 2025 15:44:03 +0000 (0:00:01.010) 0:05:00.853 ***** 2025-10-08 15:49:51.214240 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214246 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214252 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214258 | orchestrator | 2025-10-08 15:49:51.214265 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-08 15:49:51.214271 | orchestrator | Wednesday 08 October 2025 15:44:04 +0000 (0:00:00.739) 0:05:01.592 ***** 2025-10-08 15:49:51.214277 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214283 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214289 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214295 | orchestrator | 2025-10-08 15:49:51.214301 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-08 15:49:51.214308 | orchestrator | Wednesday 08 October 2025 15:44:04 +0000 (0:00:00.357) 0:05:01.950 ***** 2025-10-08 15:49:51.214314 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214320 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214326 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214332 | orchestrator | 2025-10-08 15:49:51.214338 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-08 15:49:51.214344 | orchestrator | Wednesday 08 October 2025 15:44:05 +0000 (0:00:00.357) 0:05:02.307 ***** 2025-10-08 15:49:51.214351 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214357 | 
orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214363 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214369 | orchestrator | 2025-10-08 15:49:51.214375 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-08 15:49:51.214382 | orchestrator | Wednesday 08 October 2025 15:44:05 +0000 (0:00:00.640) 0:05:02.947 ***** 2025-10-08 15:49:51.214388 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214394 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214400 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214406 | orchestrator | 2025-10-08 15:49:51.214412 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-08 15:49:51.214418 | orchestrator | Wednesday 08 October 2025 15:44:06 +0000 (0:00:00.342) 0:05:03.290 ***** 2025-10-08 15:49:51.214444 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214452 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214458 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214464 | orchestrator | 2025-10-08 15:49:51.214470 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-08 15:49:51.214476 | orchestrator | Wednesday 08 October 2025 15:44:06 +0000 (0:00:00.392) 0:05:03.682 ***** 2025-10-08 15:49:51.214486 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214492 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214499 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214505 | orchestrator | 2025-10-08 15:49:51.214511 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-08 15:49:51.214517 | orchestrator | Wednesday 08 October 2025 15:44:07 +0000 (0:00:00.323) 0:05:04.006 ***** 2025-10-08 15:49:51.214523 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214529 | 
orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214535 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214541 | orchestrator | 2025-10-08 15:49:51.214548 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-08 15:49:51.214554 | orchestrator | Wednesday 08 October 2025 15:44:07 +0000 (0:00:00.326) 0:05:04.332 ***** 2025-10-08 15:49:51.214560 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214566 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214572 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214578 | orchestrator | 2025-10-08 15:49:51.214584 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-08 15:49:51.214590 | orchestrator | Wednesday 08 October 2025 15:44:07 +0000 (0:00:00.664) 0:05:04.996 ***** 2025-10-08 15:49:51.214601 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214607 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214613 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214619 | orchestrator | 2025-10-08 15:49:51.214625 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-08 15:49:51.214632 | orchestrator | Wednesday 08 October 2025 15:44:08 +0000 (0:00:00.336) 0:05:05.333 ***** 2025-10-08 15:49:51.214638 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214644 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214650 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214656 | orchestrator | 2025-10-08 15:49:51.214662 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-10-08 15:49:51.214668 | orchestrator | Wednesday 08 October 2025 15:44:08 +0000 (0:00:00.562) 0:05:05.896 ***** 2025-10-08 15:49:51.214674 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-08 15:49:51.214681 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-08 15:49:51.214687 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-08 15:49:51.214693 | orchestrator | 2025-10-08 15:49:51.214699 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-10-08 15:49:51.214705 | orchestrator | Wednesday 08 October 2025 15:44:09 +0000 (0:00:00.943) 0:05:06.840 ***** 2025-10-08 15:49:51.214711 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.214718 | orchestrator | 2025-10-08 15:49:51.214724 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-10-08 15:49:51.214730 | orchestrator | Wednesday 08 October 2025 15:44:10 +0000 (0:00:00.815) 0:05:07.655 ***** 2025-10-08 15:49:51.214736 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.214742 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.214748 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.214754 | orchestrator | 2025-10-08 15:49:51.214760 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-10-08 15:49:51.214766 | orchestrator | Wednesday 08 October 2025 15:44:11 +0000 (0:00:00.772) 0:05:08.427 ***** 2025-10-08 15:49:51.214773 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.214779 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.214785 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.214791 | orchestrator | 2025-10-08 15:49:51.214797 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-10-08 15:49:51.214803 | orchestrator | Wednesday 08 October 2025 15:44:11 +0000 (0:00:00.332) 0:05:08.760 ***** 2025-10-08 15:49:51.214809 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 
15:49:51.214815 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 15:49:51.214822 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 15:49:51.214828 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-10-08 15:49:51.214834 | orchestrator | 2025-10-08 15:49:51.214840 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-10-08 15:49:51.214846 | orchestrator | Wednesday 08 October 2025 15:44:22 +0000 (0:00:10.438) 0:05:19.199 ***** 2025-10-08 15:49:51.214852 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.214858 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.214865 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.214871 | orchestrator | 2025-10-08 15:49:51.214877 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-10-08 15:49:51.214883 | orchestrator | Wednesday 08 October 2025 15:44:22 +0000 (0:00:00.353) 0:05:19.552 ***** 2025-10-08 15:49:51.214889 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-08 15:49:51.214895 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-08 15:49:51.214901 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-08 15:49:51.214912 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-10-08 15:49:51.214918 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.214924 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.214930 | orchestrator | 2025-10-08 15:49:51.214936 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-10-08 15:49:51.214943 | orchestrator | Wednesday 08 October 2025 15:44:25 +0000 (0:00:02.977) 0:05:22.529 ***** 2025-10-08 15:49:51.214949 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-08 15:49:51.214955 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-08 15:49:51.214978 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-08 15:49:51.214985 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 15:49:51.214991 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-10-08 15:49:51.214997 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-10-08 15:49:51.215004 | orchestrator | 2025-10-08 15:49:51.215013 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-10-08 15:49:51.215019 | orchestrator | Wednesday 08 October 2025 15:44:26 +0000 (0:00:01.211) 0:05:23.741 ***** 2025-10-08 15:49:51.215025 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.215032 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.215038 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.215044 | orchestrator | 2025-10-08 15:49:51.215050 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-10-08 15:49:51.215056 | orchestrator | Wednesday 08 October 2025 15:44:27 +0000 (0:00:00.799) 0:05:24.540 ***** 2025-10-08 15:49:51.215062 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215069 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.215075 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.215081 | orchestrator | 2025-10-08 15:49:51.215087 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-10-08 15:49:51.215093 | orchestrator | Wednesday 08 October 2025 15:44:28 +0000 (0:00:00.601) 0:05:25.142 ***** 2025-10-08 15:49:51.215099 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215105 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.215112 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.215118 | orchestrator | 2025-10-08 15:49:51.215124 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2025-10-08 15:49:51.215130 | orchestrator | Wednesday 08 October 2025 15:44:28 +0000 (0:00:00.333) 0:05:25.476 ***** 2025-10-08 15:49:51.215136 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.215143 | orchestrator | 2025-10-08 15:49:51.215149 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-10-08 15:49:51.215166 | orchestrator | Wednesday 08 October 2025 15:44:29 +0000 (0:00:00.609) 0:05:26.085 ***** 2025-10-08 15:49:51.215172 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215179 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.215185 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.215191 | orchestrator | 2025-10-08 15:49:51.215197 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-10-08 15:49:51.215203 | orchestrator | Wednesday 08 October 2025 15:44:29 +0000 (0:00:00.595) 0:05:26.680 ***** 2025-10-08 15:49:51.215210 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215216 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.215222 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:49:51.215228 | orchestrator | 2025-10-08 15:49:51.215234 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-10-08 15:49:51.215240 | orchestrator | Wednesday 08 October 2025 15:44:30 +0000 (0:00:00.329) 0:05:27.010 ***** 2025-10-08 15:49:51.215246 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-10-08 15:49:51.215253 | orchestrator | 2025-10-08 15:49:51.215263 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-10-08 15:49:51.215270 | orchestrator | Wednesday 08 October 2025 
15:44:30 +0000 (0:00:00.552) 0:05:27.562 ***** 2025-10-08 15:49:51.215276 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215282 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215288 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215294 | orchestrator | 2025-10-08 15:49:51.215300 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-10-08 15:49:51.215307 | orchestrator | Wednesday 08 October 2025 15:44:32 +0000 (0:00:01.538) 0:05:29.101 ***** 2025-10-08 15:49:51.215313 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215319 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215325 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215331 | orchestrator | 2025-10-08 15:49:51.215337 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-10-08 15:49:51.215343 | orchestrator | Wednesday 08 October 2025 15:44:33 +0000 (0:00:01.092) 0:05:30.194 ***** 2025-10-08 15:49:51.215349 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215355 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215361 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215367 | orchestrator | 2025-10-08 15:49:51.215374 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-10-08 15:49:51.215380 | orchestrator | Wednesday 08 October 2025 15:44:34 +0000 (0:00:01.681) 0:05:31.876 ***** 2025-10-08 15:49:51.215386 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215392 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215398 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215404 | orchestrator | 2025-10-08 15:49:51.215411 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-10-08 15:49:51.215417 | orchestrator | Wednesday 08 October 2025 15:44:37 +0000 
(0:00:02.758) 0:05:34.635 ***** 2025-10-08 15:49:51.215423 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215429 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:49:51.215435 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-10-08 15:49:51.215442 | orchestrator | 2025-10-08 15:49:51.215448 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-10-08 15:49:51.215454 | orchestrator | Wednesday 08 October 2025 15:44:38 +0000 (0:00:00.706) 0:05:35.341 ***** 2025-10-08 15:49:51.215461 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-10-08 15:49:51.215468 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-10-08 15:49:51.215474 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-10-08 15:49:51.215498 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-10-08 15:49:51.215505 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-10-08 15:49:51.215511 | orchestrator | 2025-10-08 15:49:51.215518 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-10-08 15:49:51.215529 | orchestrator | Wednesday 08 October 2025 15:45:02 +0000 (0:00:24.363) 0:05:59.705 ***** 2025-10-08 15:49:51.215535 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-10-08 15:49:51.215542 | orchestrator | 2025-10-08 15:49:51.215548 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-10-08 15:49:51.215554 | orchestrator | Wednesday 08 October 2025 15:45:03 +0000 (0:00:01.241) 0:06:00.946 ***** 2025-10-08 15:49:51.215560 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.215566 | orchestrator | 2025-10-08 15:49:51.215572 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-10-08 15:49:51.215578 | orchestrator | Wednesday 08 October 2025 15:45:04 +0000 (0:00:00.310) 0:06:01.256 ***** 2025-10-08 15:49:51.215589 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.215595 | orchestrator | 2025-10-08 15:49:51.215601 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-10-08 15:49:51.215607 | orchestrator | Wednesday 08 October 2025 15:45:04 +0000 (0:00:00.140) 0:06:01.396 ***** 2025-10-08 15:49:51.215613 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-10-08 15:49:51.215619 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-10-08 15:49:51.215625 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-10-08 15:49:51.215631 | orchestrator | 2025-10-08 15:49:51.215637 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-10-08 15:49:51.215643 | orchestrator | Wednesday 08 October 2025 15:45:10 +0000 (0:00:06.588) 0:06:07.985 ***** 2025-10-08 15:49:51.215650 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-10-08 15:49:51.215656 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-10-08 15:49:51.215662 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-10-08 15:49:51.215668 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-10-08 15:49:51.215674 | orchestrator | 2025-10-08 15:49:51.215680 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-08 15:49:51.215687 | orchestrator | Wednesday 08 October 2025 15:45:16 +0000 (0:00:05.070) 0:06:13.055 ***** 2025-10-08 15:49:51.215693 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215699 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215705 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215711 | orchestrator | 2025-10-08 15:49:51.215717 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-10-08 15:49:51.215723 | orchestrator | Wednesday 08 October 2025 15:45:16 +0000 (0:00:00.732) 0:06:13.788 ***** 2025-10-08 15:49:51.215729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:49:51.215735 | orchestrator | 2025-10-08 15:49:51.215741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-10-08 15:49:51.215747 | orchestrator | Wednesday 08 October 2025 15:45:17 +0000 (0:00:00.601) 0:06:14.389 ***** 2025-10-08 15:49:51.215753 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.215760 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.215766 | orchestrator | ok: 
[testbed-node-2] 2025-10-08 15:49:51.215772 | orchestrator | 2025-10-08 15:49:51.215778 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-10-08 15:49:51.215784 | orchestrator | Wednesday 08 October 2025 15:45:17 +0000 (0:00:00.578) 0:06:14.967 ***** 2025-10-08 15:49:51.215790 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.215797 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.215803 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.215809 | orchestrator | 2025-10-08 15:49:51.215815 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-10-08 15:49:51.215821 | orchestrator | Wednesday 08 October 2025 15:45:19 +0000 (0:00:01.210) 0:06:16.178 ***** 2025-10-08 15:49:51.215828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-08 15:49:51.215834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-08 15:49:51.215840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-08 15:49:51.215846 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:49:51.215852 | orchestrator | 2025-10-08 15:49:51.215859 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-10-08 15:49:51.215865 | orchestrator | Wednesday 08 October 2025 15:45:19 +0000 (0:00:00.589) 0:06:16.767 ***** 2025-10-08 15:49:51.215871 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.215877 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.215883 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.215889 | orchestrator | 2025-10-08 15:49:51.215901 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-10-08 15:49:51.215908 | orchestrator | 2025-10-08 15:49:51.215914 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-08 
15:49:51.215920 | orchestrator | Wednesday 08 October 2025 15:45:20 +0000 (0:00:00.548) 0:06:17.316 ***** 2025-10-08 15:49:51.215926 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.215932 | orchestrator | 2025-10-08 15:49:51.215938 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-08 15:49:51.215944 | orchestrator | Wednesday 08 October 2025 15:45:21 +0000 (0:00:00.798) 0:06:18.114 ***** 2025-10-08 15:49:51.215951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.215957 | orchestrator | 2025-10-08 15:49:51.215980 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-08 15:49:51.215987 | orchestrator | Wednesday 08 October 2025 15:45:21 +0000 (0:00:00.531) 0:06:18.646 ***** 2025-10-08 15:49:51.215993 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.215999 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216006 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216012 | orchestrator | 2025-10-08 15:49:51.216021 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-08 15:49:51.216027 | orchestrator | Wednesday 08 October 2025 15:45:22 +0000 (0:00:00.549) 0:06:19.196 ***** 2025-10-08 15:49:51.216034 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216040 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216046 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216052 | orchestrator | 2025-10-08 15:49:51.216058 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-08 15:49:51.216064 | orchestrator | Wednesday 08 October 2025 15:45:22 +0000 (0:00:00.725) 0:06:19.922 ***** 
2025-10-08 15:49:51.216070 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216076 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216082 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216088 | orchestrator | 2025-10-08 15:49:51.216094 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-08 15:49:51.216100 | orchestrator | Wednesday 08 October 2025 15:45:23 +0000 (0:00:00.684) 0:06:20.606 ***** 2025-10-08 15:49:51.216106 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216113 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216119 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216125 | orchestrator | 2025-10-08 15:49:51.216131 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-08 15:49:51.216137 | orchestrator | Wednesday 08 October 2025 15:45:24 +0000 (0:00:00.684) 0:06:21.291 ***** 2025-10-08 15:49:51.216143 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216149 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216185 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216191 | orchestrator | 2025-10-08 15:49:51.216197 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-08 15:49:51.216203 | orchestrator | Wednesday 08 October 2025 15:45:24 +0000 (0:00:00.574) 0:06:21.865 ***** 2025-10-08 15:49:51.216210 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216216 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216222 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216228 | orchestrator | 2025-10-08 15:49:51.216234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-08 15:49:51.216240 | orchestrator | Wednesday 08 October 2025 15:45:25 +0000 (0:00:00.322) 0:06:22.188 ***** 2025-10-08 15:49:51.216247 | 
orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216253 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216259 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216265 | orchestrator | 2025-10-08 15:49:51.216271 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-08 15:49:51.216282 | orchestrator | Wednesday 08 October 2025 15:45:25 +0000 (0:00:00.341) 0:06:22.529 ***** 2025-10-08 15:49:51.216289 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216295 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216301 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216307 | orchestrator | 2025-10-08 15:49:51.216313 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-08 15:49:51.216319 | orchestrator | Wednesday 08 October 2025 15:45:26 +0000 (0:00:00.700) 0:06:23.229 ***** 2025-10-08 15:49:51.216325 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216332 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216338 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216344 | orchestrator | 2025-10-08 15:49:51.216350 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-08 15:49:51.216356 | orchestrator | Wednesday 08 October 2025 15:45:27 +0000 (0:00:01.041) 0:06:24.271 ***** 2025-10-08 15:49:51.216362 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216369 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216375 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216381 | orchestrator | 2025-10-08 15:49:51.216387 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-08 15:49:51.216393 | orchestrator | Wednesday 08 October 2025 15:45:27 +0000 (0:00:00.336) 0:06:24.607 ***** 2025-10-08 15:49:51.216399 | orchestrator | skipping: 
[testbed-node-3] 2025-10-08 15:49:51.216406 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216412 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216418 | orchestrator | 2025-10-08 15:49:51.216424 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-08 15:49:51.216430 | orchestrator | Wednesday 08 October 2025 15:45:27 +0000 (0:00:00.326) 0:06:24.933 ***** 2025-10-08 15:49:51.216436 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216443 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216449 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216455 | orchestrator | 2025-10-08 15:49:51.216461 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-08 15:49:51.216467 | orchestrator | Wednesday 08 October 2025 15:45:28 +0000 (0:00:00.334) 0:06:25.268 ***** 2025-10-08 15:49:51.216473 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216479 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216485 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216492 | orchestrator | 2025-10-08 15:49:51.216498 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-08 15:49:51.216504 | orchestrator | Wednesday 08 October 2025 15:45:28 +0000 (0:00:00.574) 0:06:25.843 ***** 2025-10-08 15:49:51.216510 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216516 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216522 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216528 | orchestrator | 2025-10-08 15:49:51.216534 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-08 15:49:51.216541 | orchestrator | Wednesday 08 October 2025 15:45:29 +0000 (0:00:00.332) 0:06:26.176 ***** 2025-10-08 15:49:51.216547 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216553 | 
orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216559 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216565 | orchestrator | 2025-10-08 15:49:51.216571 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-08 15:49:51.216581 | orchestrator | Wednesday 08 October 2025 15:45:29 +0000 (0:00:00.314) 0:06:26.490 ***** 2025-10-08 15:49:51.216587 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216593 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216600 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216606 | orchestrator | 2025-10-08 15:49:51.216612 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-08 15:49:51.216621 | orchestrator | Wednesday 08 October 2025 15:45:29 +0000 (0:00:00.326) 0:06:26.817 ***** 2025-10-08 15:49:51.216632 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216639 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216645 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216650 | orchestrator | 2025-10-08 15:49:51.216656 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-08 15:49:51.216661 | orchestrator | Wednesday 08 October 2025 15:45:30 +0000 (0:00:00.555) 0:06:27.372 ***** 2025-10-08 15:49:51.216667 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216672 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216678 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216683 | orchestrator | 2025-10-08 15:49:51.216688 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-08 15:49:51.216694 | orchestrator | Wednesday 08 October 2025 15:45:30 +0000 (0:00:00.421) 0:06:27.793 ***** 2025-10-08 15:49:51.216699 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216704 | orchestrator | ok: 
[testbed-node-4] 2025-10-08 15:49:51.216710 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216715 | orchestrator | 2025-10-08 15:49:51.216721 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-10-08 15:49:51.216726 | orchestrator | Wednesday 08 October 2025 15:45:31 +0000 (0:00:00.566) 0:06:28.360 ***** 2025-10-08 15:49:51.216731 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216737 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216742 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216747 | orchestrator | 2025-10-08 15:49:51.216753 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-10-08 15:49:51.216758 | orchestrator | Wednesday 08 October 2025 15:45:31 +0000 (0:00:00.590) 0:06:28.950 ***** 2025-10-08 15:49:51.216764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-08 15:49:51.216769 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-08 15:49:51.216774 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-08 15:49:51.216780 | orchestrator | 2025-10-08 15:49:51.216785 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-10-08 15:49:51.216790 | orchestrator | Wednesday 08 October 2025 15:45:32 +0000 (0:00:00.611) 0:06:29.562 ***** 2025-10-08 15:49:51.216796 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.216801 | orchestrator | 2025-10-08 15:49:51.216807 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-10-08 15:49:51.216812 | orchestrator | Wednesday 08 October 2025 15:45:33 +0000 (0:00:00.541) 0:06:30.104 ***** 2025-10-08 15:49:51.216817 | orchestrator | skipping: 
[testbed-node-3] 2025-10-08 15:49:51.216822 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216828 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216833 | orchestrator | 2025-10-08 15:49:51.216838 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-10-08 15:49:51.216844 | orchestrator | Wednesday 08 October 2025 15:45:33 +0000 (0:00:00.304) 0:06:30.409 ***** 2025-10-08 15:49:51.216849 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.216855 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.216860 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.216865 | orchestrator | 2025-10-08 15:49:51.216871 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-10-08 15:49:51.216876 | orchestrator | Wednesday 08 October 2025 15:45:33 +0000 (0:00:00.550) 0:06:30.959 ***** 2025-10-08 15:49:51.216881 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216887 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216892 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216898 | orchestrator | 2025-10-08 15:49:51.216903 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-10-08 15:49:51.216908 | orchestrator | Wednesday 08 October 2025 15:45:34 +0000 (0:00:00.612) 0:06:31.571 ***** 2025-10-08 15:49:51.216917 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.216923 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.216928 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.216934 | orchestrator | 2025-10-08 15:49:51.216939 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-10-08 15:49:51.216944 | orchestrator | Wednesday 08 October 2025 15:45:34 +0000 (0:00:00.333) 0:06:31.905 ***** 2025-10-08 15:49:51.216949 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-08 15:49:51.216955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-08 15:49:51.216960 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-08 15:49:51.216966 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-08 15:49:51.216971 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-08 15:49:51.216977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-08 15:49:51.216982 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-08 15:49:51.216987 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-08 15:49:51.216993 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-08 15:49:51.217001 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-08 15:49:51.217007 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-08 15:49:51.217012 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-08 15:49:51.217021 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-08 15:49:51.217026 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-08 15:49:51.217032 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-08 15:49:51.217037 | orchestrator | 2025-10-08 15:49:51.217042 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-10-08 15:49:51.217048 | orchestrator | Wednesday 08 October 2025 15:45:36 +0000 (0:00:01.952) 0:06:33.858 *****
2025-10-08 15:49:51.217053 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217059 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217064 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217069 | orchestrator |
2025-10-08 15:49:51.217075 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-10-08 15:49:51.217080 | orchestrator | Wednesday 08 October 2025 15:45:37 +0000 (0:00:00.574) 0:06:34.432 *****
2025-10-08 15:49:51.217085 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.217091 | orchestrator |
2025-10-08 15:49:51.217096 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-10-08 15:49:51.217102 | orchestrator | Wednesday 08 October 2025 15:45:37 +0000 (0:00:00.571) 0:06:35.004 *****
2025-10-08 15:49:51.217107 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-10-08 15:49:51.217112 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-10-08 15:49:51.217118 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-10-08 15:49:51.217123 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-10-08 15:49:51.217128 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-10-08 15:49:51.217134 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-10-08 15:49:51.217139 | orchestrator |
2025-10-08 15:49:51.217145 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-10-08 15:49:51.217166 | orchestrator | Wednesday 08 October 2025 15:45:39 +0000 (0:00:01.020) 0:06:36.025 *****
2025-10-08 15:49:51.217172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:49:51.217177 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-10-08 15:49:51.217182 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-10-08 15:49:51.217188 | orchestrator |
2025-10-08 15:49:51.217193 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-10-08 15:49:51.217198 | orchestrator | Wednesday 08 October 2025 15:45:41 +0000 (0:00:02.425) 0:06:38.450 *****
2025-10-08 15:49:51.217204 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-10-08 15:49:51.217209 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-10-08 15:49:51.217214 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.217220 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-10-08 15:49:51.217225 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-10-08 15:49:51.217231 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.217236 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-10-08 15:49:51.217241 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-10-08 15:49:51.217247 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.217252 | orchestrator |
2025-10-08 15:49:51.217258 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-10-08 15:49:51.217263 | orchestrator | Wednesday 08 October 2025 15:45:42 +0000 (0:00:01.458) 0:06:39.909 *****
2025-10-08 15:49:51.217268 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-10-08 15:49:51.217274 | orchestrator |
2025-10-08 15:49:51.217279 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-10-08 15:49:51.217284 | orchestrator | Wednesday 08 October 2025 15:45:45 +0000 (0:00:02.176) 0:06:42.085 *****
2025-10-08 15:49:51.217290 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.217295 | orchestrator |
2025-10-08 15:49:51.217300 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-10-08 15:49:51.217306 | orchestrator | Wednesday 08 October 2025 15:45:45 +0000 (0:00:00.579) 0:06:42.664 *****
2025-10-08 15:49:51.217311 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93919d76-3b82-5996-a675-e75a55626347', 'data_vg': 'ceph-93919d76-3b82-5996-a675-e75a55626347'})
2025-10-08 15:49:51.217317 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ac75f6e-526f-52f0-b624-7532d6099aef', 'data_vg': 'ceph-7ac75f6e-526f-52f0-b624-7532d6099aef'})
2025-10-08 15:49:51.217323 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626', 'data_vg': 'ceph-25f30e7b-7b9e-5d46-b3fc-d4cb59f24626'})
2025-10-08 15:49:51.217328 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cead9db5-2c40-515a-bcee-782342d5bd60', 'data_vg': 'ceph-cead9db5-2c40-515a-bcee-782342d5bd60'})
2025-10-08 15:49:51.217334 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bafbc9f1-844e-58d3-a294-acb7fdea1516', 'data_vg': 'ceph-bafbc9f1-844e-58d3-a294-acb7fdea1516'})
2025-10-08 15:49:51.217342 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485', 'data_vg': 'ceph-ff85ad2a-1d5d-50f9-b3a7-2f1eee54f485'})
2025-10-08 15:49:51.217348 | orchestrator |
2025-10-08 15:49:51.217353 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-10-08 15:49:51.217362 | orchestrator | Wednesday 08 October 2025 15:46:26 +0000 (0:00:41.043) 0:07:23.708 *****
2025-10-08 15:49:51.217367 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217373 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217378 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217383 | orchestrator |
2025-10-08 15:49:51.217389 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-10-08 15:49:51.217398 | orchestrator | Wednesday 08 October 2025 15:46:27 +0000 (0:00:00.359) 0:07:24.067 *****
2025-10-08 15:49:51.217404 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.217409 | orchestrator |
2025-10-08 15:49:51.217415 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-10-08 15:49:51.217420 | orchestrator | Wednesday 08 October 2025 15:46:27 +0000 (0:00:00.526) 0:07:24.594 *****
2025-10-08 15:49:51.217425 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.217431 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.217436 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.217442 | orchestrator |
2025-10-08 15:49:51.217447 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-10-08 15:49:51.217452 | orchestrator | Wednesday 08 October 2025 15:46:28 +0000 (0:00:02.583) 0:07:25.591 *****
2025-10-08 15:49:51.217458 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.217463 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.217468 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.217474 | orchestrator |
2025-10-08 15:49:51.217479 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-10-08 15:49:51.217484 | orchestrator | Wednesday 08 October 2025 15:46:31 +0000 (0:00:02.583) 0:07:28.175 *****
2025-10-08 15:49:51.217490 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.217495 | orchestrator |
2025-10-08 15:49:51.217500 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-10-08 15:49:51.217506 | orchestrator | Wednesday 08 October 2025 15:46:31 +0000 (0:00:00.583) 0:07:28.759 *****
2025-10-08 15:49:51.217511 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.217517 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.217522 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.217527 | orchestrator |
2025-10-08 15:49:51.217533 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-10-08 15:49:51.217538 | orchestrator | Wednesday 08 October 2025 15:46:33 +0000 (0:00:01.606) 0:07:30.366 *****
2025-10-08 15:49:51.217544 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.217549 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.217554 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.217560 | orchestrator |
2025-10-08 15:49:51.217565 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-10-08 15:49:51.217570 | orchestrator | Wednesday 08 October 2025 15:46:34 +0000 (0:00:01.179) 0:07:31.545 *****
2025-10-08 15:49:51.217576 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:49:51.217581 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:49:51.217587 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:49:51.217592 | orchestrator |
2025-10-08 15:49:51.217597 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-10-08 15:49:51.217603 | orchestrator | Wednesday 08 October 2025 15:46:36 +0000 (0:00:01.715) 0:07:33.260 *****
2025-10-08 15:49:51.217608 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217614 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217619 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217624 | orchestrator |
2025-10-08 15:49:51.217630 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-10-08 15:49:51.217635 | orchestrator | Wednesday 08 October 2025 15:46:36 +0000 (0:00:00.406) 0:07:33.667 *****
2025-10-08 15:49:51.217640 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217646 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217651 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217656 | orchestrator |
2025-10-08 15:49:51.217662 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-10-08 15:49:51.217667 | orchestrator | Wednesday 08 October 2025 15:46:37 +0000 (0:00:00.622) 0:07:34.289 *****
2025-10-08 15:49:51.217673 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-08 15:49:51.217682 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-10-08 15:49:51.217687 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-10-08 15:49:51.217692 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-10-08 15:49:51.217698 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-10-08 15:49:51.217703 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-10-08 15:49:51.217708 | orchestrator |
2025-10-08 15:49:51.217714 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-10-08 15:49:51.217719 | orchestrator | Wednesday 08 October 2025 15:46:38 +0000 (0:00:01.008) 0:07:35.297 *****
2025-10-08 15:49:51.217725 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-10-08 15:49:51.217730 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-10-08 15:49:51.217735 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-10-08 15:49:51.217741 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-10-08 15:49:51.217746 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-10-08 15:49:51.217751 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-10-08 15:49:51.217756 | orchestrator |
2025-10-08 15:49:51.217762 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-10-08 15:49:51.217767 | orchestrator | Wednesday 08 October 2025 15:46:40 +0000 (0:00:02.257) 0:07:37.555 *****
2025-10-08 15:49:51.217773 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-10-08 15:49:51.217778 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-10-08 15:49:51.217783 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-10-08 15:49:51.217789 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-10-08 15:49:51.217797 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-10-08 15:49:51.217802 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-10-08 15:49:51.217808 | orchestrator |
2025-10-08 15:49:51.217813 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-10-08 15:49:51.217819 | orchestrator | Wednesday 08 October 2025 15:46:44 +0000 (0:00:03.567) 0:07:41.123 *****
2025-10-08 15:49:51.217827 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217832 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217838 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-10-08 15:49:51.217843 | orchestrator |
2025-10-08 15:49:51.217849 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-10-08 15:49:51.217854 | orchestrator | Wednesday 08 October 2025 15:46:47 +0000 (0:00:03.593) 0:07:44.717 *****
2025-10-08 15:49:51.217860 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217865 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217870 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-10-08 15:49:51.217876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-10-08 15:49:51.217881 | orchestrator |
2025-10-08 15:49:51.217887 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-10-08 15:49:51.217892 | orchestrator | Wednesday 08 October 2025 15:47:00 +0000 (0:00:12.640) 0:07:57.357 *****
2025-10-08 15:49:51.217897 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217903 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217908 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217914 | orchestrator |
2025-10-08 15:49:51.217919 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-08 15:49:51.217925 | orchestrator | Wednesday 08 October 2025 15:47:01 +0000 (0:00:01.140) 0:07:58.498 *****
2025-10-08 15:49:51.217930 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.217935 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.217941 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.217946 | orchestrator |
2025-10-08 15:49:51.217952 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-10-08 15:49:51.217957 | orchestrator | Wednesday 08 October 2025 15:47:01 +0000 (0:00:00.393) 0:07:58.892 *****
2025-10-08 15:49:51.217963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.217973 | orchestrator |
2025-10-08 15:49:51.217978 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-10-08 15:49:51.217984 | orchestrator | Wednesday 08 October 2025 15:47:02 +0000 (0:00:00.510) 0:07:59.402 *****
2025-10-08 15:49:51.217989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.217994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.218000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.218005 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218011 | orchestrator |
2025-10-08 15:49:51.218040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-10-08 15:49:51.218046 | orchestrator | Wednesday 08 October 2025 15:47:03 +0000 (0:00:00.639) 0:08:00.041 *****
2025-10-08 15:49:51.218051 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218058 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218063 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218069 | orchestrator |
2025-10-08 15:49:51.218074 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-10-08 15:49:51.218080 | orchestrator | Wednesday 08 October 2025 15:47:03 +0000 (0:00:00.613) 0:08:00.654 *****
2025-10-08 15:49:51.218085 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218091 | orchestrator |
2025-10-08 15:49:51.218096 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-10-08 15:49:51.218102 | orchestrator | Wednesday 08 October 2025 15:47:03 +0000 (0:00:00.234) 0:08:00.889 *****
2025-10-08 15:49:51.218107 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218113 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218119 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218125 | orchestrator |
2025-10-08 15:49:51.218130 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-10-08 15:49:51.218135 | orchestrator | Wednesday 08 October 2025 15:47:04 +0000 (0:00:00.317) 0:08:01.206 *****
2025-10-08 15:49:51.218141 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218146 | orchestrator |
2025-10-08 15:49:51.218164 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-10-08 15:49:51.218170 | orchestrator | Wednesday 08 October 2025 15:47:04 +0000 (0:00:00.265) 0:08:01.471 *****
2025-10-08 15:49:51.218175 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218181 | orchestrator |
2025-10-08 15:49:51.218187 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-10-08 15:49:51.218192 | orchestrator | Wednesday 08 October 2025 15:47:04 +0000 (0:00:00.230) 0:08:01.702 *****
2025-10-08 15:49:51.218198 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218203 | orchestrator |
2025-10-08 15:49:51.218209 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-10-08 15:49:51.218214 | orchestrator | Wednesday 08 October 2025 15:47:04 +0000 (0:00:00.126) 0:08:01.829 *****
2025-10-08 15:49:51.218219 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218225 | orchestrator |
2025-10-08 15:49:51.218230 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-10-08 15:49:51.218236 | orchestrator | Wednesday 08 October 2025 15:47:05 +0000 (0:00:00.235) 0:08:02.064 *****
2025-10-08 15:49:51.218241 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218247 | orchestrator |
2025-10-08 15:49:51.218252 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-10-08 15:49:51.218258 | orchestrator | Wednesday 08 October 2025 15:47:05 +0000 (0:00:00.199) 0:08:02.264 *****
2025-10-08 15:49:51.218263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:49:51.218269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:49:51.218278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:49:51.218284 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218298 | orchestrator |
2025-10-08 15:49:51.218303 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-10-08 15:49:51.218312 | orchestrator | Wednesday 08 October 2025 15:47:05 +0000 (0:00:00.716) 0:08:02.980 *****
2025-10-08 15:49:51.218317 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218323 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218328 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218334 | orchestrator |
2025-10-08 15:49:51.218339 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-10-08 15:49:51.218345 | orchestrator | Wednesday 08 October 2025 15:47:06 +0000 (0:00:00.603) 0:08:03.584 *****
2025-10-08 15:49:51.218350 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218356 | orchestrator |
2025-10-08 15:49:51.218361 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-10-08 15:49:51.218366 | orchestrator | Wednesday 08 October 2025 15:47:06 +0000 (0:00:00.209) 0:08:03.794 *****
2025-10-08 15:49:51.218372 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218377 | orchestrator |
2025-10-08 15:49:51.218383 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-10-08 15:49:51.218388 | orchestrator |
2025-10-08 15:49:51.218393 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-08 15:49:51.218399 | orchestrator | Wednesday 08 October 2025 15:47:07 +0000 (0:00:00.659) 0:08:04.454 *****
2025-10-08 15:49:51.218404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.218411 | orchestrator |
2025-10-08 15:49:51.218416 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-08 15:49:51.218421 | orchestrator | Wednesday 08 October 2025 15:47:08 +0000 (0:00:01.275) 0:08:05.729 *****
2025-10-08 15:49:51.218427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:49:51.218432 | orchestrator |
2025-10-08 15:49:51.218438 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-08 15:49:51.218443 | orchestrator | Wednesday 08 October 2025 15:47:09 +0000 (0:00:01.211) 0:08:06.941 *****
2025-10-08 15:49:51.218448 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218454 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.218459 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.218465 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.218470 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218476 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218481 | orchestrator |
2025-10-08 15:49:51.218486 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-08 15:49:51.218492 | orchestrator | Wednesday 08 October 2025 15:47:10 +0000 (0:00:00.889) 0:08:07.830 *****
2025-10-08 15:49:51.218497 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218503 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218508 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218513 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.218519 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.218524 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.218530 | orchestrator |
2025-10-08 15:49:51.218535 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-08 15:49:51.218540 | orchestrator | Wednesday 08 October 2025 15:47:11 +0000 (0:00:01.017) 0:08:08.848 *****
2025-10-08 15:49:51.218546 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218551 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218557 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218562 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.218567 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.218573 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.218578 | orchestrator |
2025-10-08 15:49:51.218588 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-10-08 15:49:51.218593 | orchestrator | Wednesday 08 October 2025 15:47:13 +0000 (0:00:01.305) 0:08:10.153 *****
2025-10-08 15:49:51.218599 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218604 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218610 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218615 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.218620 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.218626 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.218631 | orchestrator |
2025-10-08 15:49:51.218636 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-08 15:49:51.218642 | orchestrator | Wednesday 08 October 2025 15:47:14 +0000 (0:00:01.034) 0:08:11.187 *****
2025-10-08 15:49:51.218647 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218653 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218658 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.218663 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.218669 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.218674 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218679 | orchestrator |
2025-10-08 15:49:51.218685 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-08 15:49:51.218690 | orchestrator | Wednesday 08 October 2025 15:47:15 +0000 (0:00:00.891) 0:08:12.080 *****
2025-10-08 15:49:51.218695 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218701 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218706 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218711 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218717 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218722 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218727 | orchestrator |
2025-10-08 15:49:51.218733 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-08 15:49:51.218738 | orchestrator | Wednesday 08 October 2025 15:47:15 +0000 (0:00:00.693) 0:08:12.773 *****
2025-10-08 15:49:51.218744 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218749 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218754 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218762 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218768 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218773 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218779 | orchestrator |
2025-10-08 15:49:51.218784 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-08 15:49:51.218792 | orchestrator | Wednesday 08 October 2025 15:47:16 +0000 (0:00:00.894) 0:08:13.668 *****
2025-10-08 15:49:51.218798 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.218803 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.218809 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.218814 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.218820 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.218825 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.218830 | orchestrator |
2025-10-08 15:49:51.218836 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-08 15:49:51.218841 | orchestrator | Wednesday 08 October 2025 15:47:17 +0000 (0:00:01.089) 0:08:14.757 *****
2025-10-08 15:49:51.218847 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.218852 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.218857 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.218863 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.218868 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.218873 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.218879 | orchestrator |
2025-10-08 15:49:51.218884 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-08 15:49:51.218889 | orchestrator | Wednesday 08 October 2025 15:47:19 +0000 (0:00:01.318) 0:08:16.075 *****
2025-10-08 15:49:51.218895 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.218904 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.218909 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.218915 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218920 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218925 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218931 | orchestrator |
2025-10-08 15:49:51.218936 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-08 15:49:51.218941 | orchestrator | Wednesday 08 October 2025 15:47:19 +0000 (0:00:00.634) 0:08:16.710 *****
2025-10-08 15:49:51.218947 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.218952 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.218958 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.218963 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.218968 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.218974 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.218979 | orchestrator |
2025-10-08 15:49:51.218985 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-10-08 15:49:51.218990 | orchestrator | Wednesday 08 October 2025 15:47:20 +0000 (0:00:00.888) 0:08:17.598 *****
2025-10-08 15:49:51.218995 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.219001 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.219006 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.219011 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.219017 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.219022 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.219028 | orchestrator |
2025-10-08 15:49:51.219033 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-10-08 15:49:51.219038 | orchestrator | Wednesday 08 October 2025 15:47:21 +0000 (0:00:00.666) 0:08:18.265 *****
2025-10-08 15:49:51.219044 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.219049 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.219055 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.219060 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.219065 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.219070 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.219076 | orchestrator |
2025-10-08 15:49:51.219081 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-10-08 15:49:51.219087 | orchestrator | Wednesday 08 October 2025 15:47:22 +0000 (0:00:00.859) 0:08:19.124 *****
2025-10-08 15:49:51.219092 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.219098 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.219103 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.219108 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.219114 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.219119 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.219124 | orchestrator |
2025-10-08 15:49:51.219130 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-10-08 15:49:51.219135 | orchestrator | Wednesday 08 October 2025 15:47:22 +0000 (0:00:00.670) 0:08:19.795 *****
2025-10-08 15:49:51.219140 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.219146 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.219176 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.219183 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.219188 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.219194 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.219199 | orchestrator |
2025-10-08 15:49:51.219205 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-10-08 15:49:51.219210 | orchestrator | Wednesday 08 October 2025 15:47:23 +0000 (0:00:00.606) 0:08:20.402 *****
2025-10-08 15:49:51.219215 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:49:51.219221 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:49:51.219226 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:49:51.219232 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.219237 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.219247 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.219252 | orchestrator |
2025-10-08 15:49:51.219257 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-10-08 15:49:51.219262 | orchestrator | Wednesday 08 October 2025 15:47:24 +0000 (0:00:00.851) 0:08:21.253 *****
2025-10-08 15:49:51.219267 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.219272 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.219276 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.219281 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:49:51.219286 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:49:51.219291 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:49:51.219295 | orchestrator |
2025-10-08 15:49:51.219300 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-10-08 15:49:51.219305 | orchestrator | Wednesday 08 October 2025 15:47:24 +0000 (0:00:00.615) 0:08:21.869 *****
2025-10-08 15:49:51.219310 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.219315 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.219322 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.219327 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.219332 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.219337 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.219342 | orchestrator |
2025-10-08 15:49:51.219346 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-10-08 15:49:51.219354 | orchestrator | Wednesday 08 October 2025 15:47:25 +0000 (0:00:00.869) 0:08:22.739 *****
2025-10-08 15:49:51.219359 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:49:51.219364 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:49:51.219369 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:49:51.219374 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:49:51.219378 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:49:51.219383 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:49:51.219388 | orchestrator |
2025-10-08 15:49:51.219393 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-10-08 15:49:51.219398 | orchestrator | Wednesday 08 October 2025 15:47:27 +0000 (0:00:01.267) 0:08:24.007 *****
2025-10-08 15:49:51.219403 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:49:51.219407 | orchestrator
| 2025-10-08 15:49:51.219412 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-10-08 15:49:51.219417 | orchestrator | Wednesday 08 October 2025 15:47:30 +0000 (0:00:03.901) 0:08:27.908 ***** 2025-10-08 15:49:51.219422 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.219427 | orchestrator | 2025-10-08 15:49:51.219432 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-10-08 15:49:51.219436 | orchestrator | Wednesday 08 October 2025 15:47:32 +0000 (0:00:02.060) 0:08:29.969 ***** 2025-10-08 15:49:51.219441 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.219446 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.219451 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.219456 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.219461 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.219465 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.219470 | orchestrator | 2025-10-08 15:49:51.219475 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-10-08 15:49:51.219480 | orchestrator | Wednesday 08 October 2025 15:47:34 +0000 (0:00:01.799) 0:08:31.768 ***** 2025-10-08 15:49:51.219485 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.219489 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.219494 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.219499 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.219504 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.219508 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.219513 | orchestrator | 2025-10-08 15:49:51.219518 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-10-08 15:49:51.219523 | orchestrator | Wednesday 08 October 2025 15:47:35 +0000 (0:00:00.985) 0:08:32.753 
***** 2025-10-08 15:49:51.219531 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.219536 | orchestrator | 2025-10-08 15:49:51.219541 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-10-08 15:49:51.219546 | orchestrator | Wednesday 08 October 2025 15:47:37 +0000 (0:00:01.253) 0:08:34.006 ***** 2025-10-08 15:49:51.219551 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.219556 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.219560 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.219565 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.219570 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.219575 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.219579 | orchestrator | 2025-10-08 15:49:51.219584 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-10-08 15:49:51.219589 | orchestrator | Wednesday 08 October 2025 15:47:38 +0000 (0:00:01.755) 0:08:35.762 ***** 2025-10-08 15:49:51.219594 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.219599 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.219603 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.219608 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.219613 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.219618 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.219622 | orchestrator | 2025-10-08 15:49:51.219627 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-10-08 15:49:51.219632 | orchestrator | Wednesday 08 October 2025 15:47:41 +0000 (0:00:03.190) 0:08:38.953 ***** 2025-10-08 15:49:51.219637 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.219642 | orchestrator | 2025-10-08 15:49:51.219647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-10-08 15:49:51.219652 | orchestrator | Wednesday 08 October 2025 15:47:43 +0000 (0:00:01.285) 0:08:40.239 ***** 2025-10-08 15:49:51.219656 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.219661 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.219666 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:49:51.219671 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.219676 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.219681 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.219685 | orchestrator | 2025-10-08 15:49:51.219690 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-10-08 15:49:51.219695 | orchestrator | Wednesday 08 October 2025 15:47:43 +0000 (0:00:00.618) 0:08:40.857 ***** 2025-10-08 15:49:51.219700 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:49:51.219705 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:49:51.219710 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:49:51.219715 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.219719 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.219724 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.219729 | orchestrator | 2025-10-08 15:49:51.219734 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-10-08 15:49:51.219738 | orchestrator | Wednesday 08 October 2025 15:47:46 +0000 (0:00:02.465) 0:08:43.322 ***** 2025-10-08 15:49:51.219743 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:49:51.219748 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:49:51.219753 | orchestrator | ok: 
[testbed-node-2] 2025-10-08 15:49:51.219758 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.219765 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.219770 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.219774 | orchestrator | 2025-10-08 15:49:51.219779 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-10-08 15:49:51.219784 | orchestrator | 2025-10-08 15:49:51.219789 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-08 15:49:51.219800 | orchestrator | Wednesday 08 October 2025 15:47:47 +0000 (0:00:01.179) 0:08:44.502 ***** 2025-10-08 15:49:51.219805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.219810 | orchestrator | 2025-10-08 15:49:51.219815 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-08 15:49:51.219820 | orchestrator | Wednesday 08 October 2025 15:47:48 +0000 (0:00:00.535) 0:08:45.037 ***** 2025-10-08 15:49:51.219825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.219830 | orchestrator | 2025-10-08 15:49:51.219835 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-08 15:49:51.219840 | orchestrator | Wednesday 08 October 2025 15:47:48 +0000 (0:00:00.757) 0:08:45.794 ***** 2025-10-08 15:49:51.219844 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.219849 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.219854 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.219859 | orchestrator | 2025-10-08 15:49:51.219864 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-08 15:49:51.219868 | orchestrator | 
Wednesday 08 October 2025 15:47:49 +0000 (0:00:00.351) 0:08:46.145 ***** 2025-10-08 15:49:51.219873 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.219878 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.219883 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.219888 | orchestrator | 2025-10-08 15:49:51.219893 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-08 15:49:51.219897 | orchestrator | Wednesday 08 October 2025 15:47:49 +0000 (0:00:00.707) 0:08:46.852 ***** 2025-10-08 15:49:51.219902 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.219907 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.219912 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.219917 | orchestrator | 2025-10-08 15:49:51.219921 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-08 15:49:51.219926 | orchestrator | Wednesday 08 October 2025 15:47:50 +0000 (0:00:00.816) 0:08:47.668 ***** 2025-10-08 15:49:51.219931 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.219936 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.219941 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.219945 | orchestrator | 2025-10-08 15:49:51.219950 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-08 15:49:51.219955 | orchestrator | Wednesday 08 October 2025 15:47:51 +0000 (0:00:01.095) 0:08:48.764 ***** 2025-10-08 15:49:51.219960 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.219973 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.219978 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.219982 | orchestrator | 2025-10-08 15:49:51.219987 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-08 15:49:51.219992 | orchestrator | Wednesday 08 October 2025 15:47:52 +0000 (0:00:00.370) 
0:08:49.134 ***** 2025-10-08 15:49:51.219997 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220002 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220007 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220012 | orchestrator | 2025-10-08 15:49:51.220017 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-08 15:49:51.220021 | orchestrator | Wednesday 08 October 2025 15:47:52 +0000 (0:00:00.318) 0:08:49.453 ***** 2025-10-08 15:49:51.220026 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220031 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220036 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220041 | orchestrator | 2025-10-08 15:49:51.220046 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-08 15:49:51.220050 | orchestrator | Wednesday 08 October 2025 15:47:52 +0000 (0:00:00.292) 0:08:49.746 ***** 2025-10-08 15:49:51.220055 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220064 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220068 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220073 | orchestrator | 2025-10-08 15:49:51.220078 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-08 15:49:51.220083 | orchestrator | Wednesday 08 October 2025 15:47:53 +0000 (0:00:01.099) 0:08:50.846 ***** 2025-10-08 15:49:51.220088 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220093 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220098 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220102 | orchestrator | 2025-10-08 15:49:51.220107 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-08 15:49:51.220112 | orchestrator | Wednesday 08 October 2025 15:47:54 +0000 (0:00:00.768) 0:08:51.614 ***** 2025-10-08 
15:49:51.220117 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220122 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220127 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220131 | orchestrator | 2025-10-08 15:49:51.220136 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-08 15:49:51.220141 | orchestrator | Wednesday 08 October 2025 15:47:54 +0000 (0:00:00.317) 0:08:51.932 ***** 2025-10-08 15:49:51.220146 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220160 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220165 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220170 | orchestrator | 2025-10-08 15:49:51.220175 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-08 15:49:51.220180 | orchestrator | Wednesday 08 October 2025 15:47:55 +0000 (0:00:00.371) 0:08:52.303 ***** 2025-10-08 15:49:51.220185 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220190 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220194 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220199 | orchestrator | 2025-10-08 15:49:51.220204 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-08 15:49:51.220212 | orchestrator | Wednesday 08 October 2025 15:47:55 +0000 (0:00:00.601) 0:08:52.905 ***** 2025-10-08 15:49:51.220217 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220221 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220226 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220231 | orchestrator | 2025-10-08 15:49:51.220236 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-08 15:49:51.220243 | orchestrator | Wednesday 08 October 2025 15:47:56 +0000 (0:00:00.358) 0:08:53.263 ***** 2025-10-08 15:49:51.220248 | orchestrator | ok: 
[testbed-node-3] 2025-10-08 15:49:51.220253 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220258 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220262 | orchestrator | 2025-10-08 15:49:51.220267 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-08 15:49:51.220272 | orchestrator | Wednesday 08 October 2025 15:47:56 +0000 (0:00:00.355) 0:08:53.618 ***** 2025-10-08 15:49:51.220277 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220282 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220287 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220291 | orchestrator | 2025-10-08 15:49:51.220296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-08 15:49:51.220301 | orchestrator | Wednesday 08 October 2025 15:47:56 +0000 (0:00:00.316) 0:08:53.935 ***** 2025-10-08 15:49:51.220306 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220311 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220316 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220321 | orchestrator | 2025-10-08 15:49:51.220325 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-08 15:49:51.220330 | orchestrator | Wednesday 08 October 2025 15:47:57 +0000 (0:00:00.636) 0:08:54.571 ***** 2025-10-08 15:49:51.220335 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220340 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220345 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220353 | orchestrator | 2025-10-08 15:49:51.220358 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-08 15:49:51.220363 | orchestrator | Wednesday 08 October 2025 15:47:57 +0000 (0:00:00.415) 0:08:54.987 ***** 2025-10-08 15:49:51.220367 | orchestrator | ok: [testbed-node-3] 
2025-10-08 15:49:51.220372 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220377 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220382 | orchestrator | 2025-10-08 15:49:51.220387 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-08 15:49:51.220391 | orchestrator | Wednesday 08 October 2025 15:47:58 +0000 (0:00:00.334) 0:08:55.321 ***** 2025-10-08 15:49:51.220396 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220401 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220406 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220411 | orchestrator | 2025-10-08 15:49:51.220416 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-10-08 15:49:51.220420 | orchestrator | Wednesday 08 October 2025 15:47:59 +0000 (0:00:00.834) 0:08:56.155 ***** 2025-10-08 15:49:51.220425 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220430 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220435 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-10-08 15:49:51.220440 | orchestrator | 2025-10-08 15:49:51.220445 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-10-08 15:49:51.220450 | orchestrator | Wednesday 08 October 2025 15:47:59 +0000 (0:00:00.442) 0:08:56.598 ***** 2025-10-08 15:49:51.220454 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-08 15:49:51.220459 | orchestrator | 2025-10-08 15:49:51.220464 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-10-08 15:49:51.220469 | orchestrator | Wednesday 08 October 2025 15:48:01 +0000 (0:00:02.048) 0:08:58.646 ***** 2025-10-08 15:49:51.220474 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-10-08 15:49:51.220480 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220485 | orchestrator | 2025-10-08 15:49:51.220490 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-10-08 15:49:51.220495 | orchestrator | Wednesday 08 October 2025 15:48:01 +0000 (0:00:00.209) 0:08:58.856 ***** 2025-10-08 15:49:51.220500 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-08 15:49:51.220510 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-08 15:49:51.220515 | orchestrator | 2025-10-08 15:49:51.220520 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-10-08 15:49:51.220525 | orchestrator | Wednesday 08 October 2025 15:48:09 +0000 (0:00:07.912) 0:09:06.769 ***** 2025-10-08 15:49:51.220530 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-08 15:49:51.220535 | orchestrator | 2025-10-08 15:49:51.220539 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-10-08 15:49:51.220544 | orchestrator | Wednesday 08 October 2025 15:48:13 +0000 (0:00:03.610) 0:09:10.379 ***** 2025-10-08 15:49:51.220549 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.220554 | orchestrator | 2025-10-08 15:49:51.220559 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-10-08 15:49:51.220569 | orchestrator | Wednesday 08 October 2025 15:48:14 +0000 (0:00:00.807) 0:09:11.186 ***** 2025-10-08 15:49:51.220574 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-08 15:49:51.220579 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-08 15:49:51.220584 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-08 15:49:51.220591 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-10-08 15:49:51.220596 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-10-08 15:49:51.220601 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-10-08 15:49:51.220606 | orchestrator | 2025-10-08 15:49:51.220610 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-10-08 15:49:51.220615 | orchestrator | Wednesday 08 October 2025 15:48:15 +0000 (0:00:01.162) 0:09:12.349 ***** 2025-10-08 15:49:51.220620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.220624 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-08 15:49:51.220629 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:49:51.220634 | orchestrator | 2025-10-08 15:49:51.220639 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-10-08 15:49:51.220643 | orchestrator | Wednesday 08 October 2025 15:48:17 +0000 (0:00:02.101) 0:09:14.450 ***** 2025-10-08 15:49:51.220648 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-08 15:49:51.220653 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-08 15:49:51.220658 | orchestrator | changed: [testbed-node-3] 
2025-10-08 15:49:51.220663 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-08 15:49:51.220668 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-08 15:49:51.220672 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-08 15:49:51.220677 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220682 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-08 15:49:51.220687 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220692 | orchestrator | 2025-10-08 15:49:51.220696 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-10-08 15:49:51.220701 | orchestrator | Wednesday 08 October 2025 15:48:18 +0000 (0:00:01.265) 0:09:15.716 ***** 2025-10-08 15:49:51.220706 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220710 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220715 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220720 | orchestrator | 2025-10-08 15:49:51.220725 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-10-08 15:49:51.220730 | orchestrator | Wednesday 08 October 2025 15:48:21 +0000 (0:00:03.039) 0:09:18.755 ***** 2025-10-08 15:49:51.220735 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.220739 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.220744 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.220749 | orchestrator | 2025-10-08 15:49:51.220754 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-10-08 15:49:51.220759 | orchestrator | Wednesday 08 October 2025 15:48:22 +0000 (0:00:00.321) 0:09:19.077 ***** 2025-10-08 15:49:51.220763 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-5, testbed-node-4 2025-10-08 15:49:51.220768 | orchestrator | 2025-10-08 15:49:51.220773 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-10-08 15:49:51.220778 | orchestrator | Wednesday 08 October 2025 15:48:22 +0000 (0:00:00.598) 0:09:19.676 ***** 2025-10-08 15:49:51.220783 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.220787 | orchestrator | 2025-10-08 15:49:51.220792 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-10-08 15:49:51.220797 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:00.860) 0:09:20.536 ***** 2025-10-08 15:49:51.220808 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220813 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220818 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220823 | orchestrator | 2025-10-08 15:49:51.220828 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-10-08 15:49:51.220832 | orchestrator | Wednesday 08 October 2025 15:48:24 +0000 (0:00:01.196) 0:09:21.733 ***** 2025-10-08 15:49:51.220837 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220842 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220847 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220851 | orchestrator | 2025-10-08 15:49:51.220856 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-10-08 15:49:51.220861 | orchestrator | Wednesday 08 October 2025 15:48:25 +0000 (0:00:01.213) 0:09:22.946 ***** 2025-10-08 15:49:51.220866 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220870 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220875 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220880 | orchestrator | 2025-10-08 15:49:51.220885 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-10-08 15:49:51.220890 | orchestrator | Wednesday 08 October 2025 15:48:28 +0000 (0:00:02.395) 0:09:25.342 ***** 2025-10-08 15:49:51.220895 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220900 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220904 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220909 | orchestrator | 2025-10-08 15:49:51.220914 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-10-08 15:49:51.220919 | orchestrator | Wednesday 08 October 2025 15:48:30 +0000 (0:00:01.998) 0:09:27.340 ***** 2025-10-08 15:49:51.220924 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.220929 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.220934 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.220938 | orchestrator | 2025-10-08 15:49:51.220944 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-08 15:49:51.220948 | orchestrator | Wednesday 08 October 2025 15:48:31 +0000 (0:00:01.320) 0:09:28.660 ***** 2025-10-08 15:49:51.220956 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.220961 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.220965 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.220970 | orchestrator | 2025-10-08 15:49:51.220975 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-10-08 15:49:51.220980 | orchestrator | Wednesday 08 October 2025 15:48:32 +0000 (0:00:00.669) 0:09:29.330 ***** 2025-10-08 15:49:51.220987 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.220993 | orchestrator | 2025-10-08 15:49:51.220997 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-10-08 15:49:51.221002 | orchestrator | 
Wednesday 08 October 2025 15:48:32 +0000 (0:00:00.431) 0:09:29.761 ***** 2025-10-08 15:49:51.221007 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221012 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221017 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221021 | orchestrator | 2025-10-08 15:49:51.221026 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-10-08 15:49:51.221031 | orchestrator | Wednesday 08 October 2025 15:48:33 +0000 (0:00:00.527) 0:09:30.289 ***** 2025-10-08 15:49:51.221036 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.221041 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.221046 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.221050 | orchestrator | 2025-10-08 15:49:51.221055 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-10-08 15:49:51.221060 | orchestrator | Wednesday 08 October 2025 15:48:34 +0000 (0:00:01.408) 0:09:31.698 ***** 2025-10-08 15:49:51.221065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-08 15:49:51.221073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-08 15:49:51.221078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-08 15:49:51.221083 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221088 | orchestrator | 2025-10-08 15:49:51.221093 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-10-08 15:49:51.221098 | orchestrator | Wednesday 08 October 2025 15:48:35 +0000 (0:00:01.066) 0:09:32.764 ***** 2025-10-08 15:49:51.221103 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221107 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221112 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221117 | orchestrator | 2025-10-08 15:49:51.221122 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-10-08 15:49:51.221127 | orchestrator | 2025-10-08 15:49:51.221131 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-08 15:49:51.221136 | orchestrator | Wednesday 08 October 2025 15:48:36 +0000 (0:00:00.947) 0:09:33.711 ***** 2025-10-08 15:49:51.221141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.221146 | orchestrator | 2025-10-08 15:49:51.221159 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-08 15:49:51.221164 | orchestrator | Wednesday 08 October 2025 15:48:37 +0000 (0:00:00.514) 0:09:34.226 ***** 2025-10-08 15:49:51.221169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.221174 | orchestrator | 2025-10-08 15:49:51.221179 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-08 15:49:51.221183 | orchestrator | Wednesday 08 October 2025 15:48:38 +0000 (0:00:00.793) 0:09:35.019 ***** 2025-10-08 15:49:51.221188 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221193 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221198 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221203 | orchestrator | 2025-10-08 15:49:51.221208 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-08 15:49:51.221213 | orchestrator | Wednesday 08 October 2025 15:48:38 +0000 (0:00:00.425) 0:09:35.444 ***** 2025-10-08 15:49:51.221217 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221222 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221227 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221232 | orchestrator | 
2025-10-08 15:49:51.221237 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-08 15:49:51.221242 | orchestrator | Wednesday 08 October 2025 15:48:39 +0000 (0:00:00.913) 0:09:36.358 ***** 2025-10-08 15:49:51.221246 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221251 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221256 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221261 | orchestrator | 2025-10-08 15:49:51.221266 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-08 15:49:51.221270 | orchestrator | Wednesday 08 October 2025 15:48:40 +0000 (0:00:00.728) 0:09:37.086 ***** 2025-10-08 15:49:51.221275 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221280 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221285 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221290 | orchestrator | 2025-10-08 15:49:51.221294 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-08 15:49:51.221299 | orchestrator | Wednesday 08 October 2025 15:48:41 +0000 (0:00:00.989) 0:09:38.076 ***** 2025-10-08 15:49:51.221304 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221309 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221314 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221318 | orchestrator | 2025-10-08 15:49:51.221323 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-08 15:49:51.221328 | orchestrator | Wednesday 08 October 2025 15:48:41 +0000 (0:00:00.331) 0:09:38.407 ***** 2025-10-08 15:49:51.221337 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221342 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221347 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221352 | orchestrator | 2025-10-08 15:49:51.221357 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-08 15:49:51.221361 | orchestrator | Wednesday 08 October 2025 15:48:41 +0000 (0:00:00.332) 0:09:38.740 ***** 2025-10-08 15:49:51.221366 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221371 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221378 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221383 | orchestrator | 2025-10-08 15:49:51.221388 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-08 15:49:51.221393 | orchestrator | Wednesday 08 October 2025 15:48:42 +0000 (0:00:00.336) 0:09:39.076 ***** 2025-10-08 15:49:51.221398 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221403 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221410 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221415 | orchestrator | 2025-10-08 15:49:51.221420 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-08 15:49:51.221425 | orchestrator | Wednesday 08 October 2025 15:48:43 +0000 (0:00:01.083) 0:09:40.160 ***** 2025-10-08 15:49:51.221430 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221435 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221439 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221444 | orchestrator | 2025-10-08 15:49:51.221449 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-08 15:49:51.221454 | orchestrator | Wednesday 08 October 2025 15:48:43 +0000 (0:00:00.718) 0:09:40.878 ***** 2025-10-08 15:49:51.221459 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221463 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221468 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221473 | orchestrator | 2025-10-08 15:49:51.221478 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-10-08 15:49:51.221483 | orchestrator | Wednesday 08 October 2025 15:48:44 +0000 (0:00:00.311) 0:09:41.190 ***** 2025-10-08 15:49:51.221488 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221492 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221497 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221502 | orchestrator | 2025-10-08 15:49:51.221507 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-08 15:49:51.221512 | orchestrator | Wednesday 08 October 2025 15:48:44 +0000 (0:00:00.333) 0:09:41.523 ***** 2025-10-08 15:49:51.221516 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221521 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221526 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221531 | orchestrator | 2025-10-08 15:49:51.221536 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-08 15:49:51.221540 | orchestrator | Wednesday 08 October 2025 15:48:44 +0000 (0:00:00.322) 0:09:41.845 ***** 2025-10-08 15:49:51.221545 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221550 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221555 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221560 | orchestrator | 2025-10-08 15:49:51.221564 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-08 15:49:51.221569 | orchestrator | Wednesday 08 October 2025 15:48:45 +0000 (0:00:00.619) 0:09:42.465 ***** 2025-10-08 15:49:51.221574 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221579 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221584 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221589 | orchestrator | 2025-10-08 15:49:51.221593 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-10-08 15:49:51.221598 | orchestrator | Wednesday 08 October 2025 15:48:45 +0000 (0:00:00.342) 0:09:42.807 ***** 2025-10-08 15:49:51.221603 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221608 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221616 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221621 | orchestrator | 2025-10-08 15:49:51.221625 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-08 15:49:51.221630 | orchestrator | Wednesday 08 October 2025 15:48:46 +0000 (0:00:00.323) 0:09:43.131 ***** 2025-10-08 15:49:51.221635 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221640 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221645 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221649 | orchestrator | 2025-10-08 15:49:51.221654 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-08 15:49:51.221659 | orchestrator | Wednesday 08 October 2025 15:48:46 +0000 (0:00:00.294) 0:09:43.426 ***** 2025-10-08 15:49:51.221664 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221669 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221674 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221678 | orchestrator | 2025-10-08 15:49:51.221683 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-08 15:49:51.221688 | orchestrator | Wednesday 08 October 2025 15:48:47 +0000 (0:00:00.601) 0:09:44.027 ***** 2025-10-08 15:49:51.221693 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221698 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221702 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221707 | orchestrator | 2025-10-08 15:49:51.221712 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-10-08 15:49:51.221717 | orchestrator | Wednesday 08 October 2025 15:48:47 +0000 (0:00:00.354) 0:09:44.382 ***** 2025-10-08 15:49:51.221721 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.221726 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.221731 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.221736 | orchestrator | 2025-10-08 15:49:51.221741 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-10-08 15:49:51.221745 | orchestrator | Wednesday 08 October 2025 15:48:47 +0000 (0:00:00.546) 0:09:44.928 ***** 2025-10-08 15:49:51.221750 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.221755 | orchestrator | 2025-10-08 15:49:51.221760 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-10-08 15:49:51.221764 | orchestrator | Wednesday 08 October 2025 15:48:48 +0000 (0:00:00.806) 0:09:45.734 ***** 2025-10-08 15:49:51.221769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.221774 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-08 15:49:51.221779 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:49:51.221784 | orchestrator | 2025-10-08 15:49:51.221789 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-08 15:49:51.221793 | orchestrator | Wednesday 08 October 2025 15:48:50 +0000 (0:00:02.245) 0:09:47.980 ***** 2025-10-08 15:49:51.221798 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-08 15:49:51.221805 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-08 15:49:51.221810 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.221815 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-08 15:49:51.221820 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-08 15:49:51.221825 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.221832 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-08 15:49:51.221837 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-08 15:49:51.221842 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.221847 | orchestrator | 2025-10-08 15:49:51.221852 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-10-08 15:49:51.221857 | orchestrator | Wednesday 08 October 2025 15:48:52 +0000 (0:00:01.210) 0:09:49.190 ***** 2025-10-08 15:49:51.221861 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.221866 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.221874 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.221879 | orchestrator | 2025-10-08 15:49:51.221884 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-10-08 15:49:51.221888 | orchestrator | Wednesday 08 October 2025 15:48:52 +0000 (0:00:00.574) 0:09:49.765 ***** 2025-10-08 15:49:51.221893 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.221898 | orchestrator | 2025-10-08 15:49:51.221903 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-10-08 15:49:51.221908 | orchestrator | Wednesday 08 October 2025 15:48:53 +0000 (0:00:00.566) 0:09:50.331 ***** 2025-10-08 15:49:51.221913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.221918 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.221923 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.221928 | orchestrator | 2025-10-08 15:49:51.221932 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-10-08 15:49:51.221937 | orchestrator | Wednesday 08 October 2025 15:48:54 +0000 (0:00:00.787) 0:09:51.119 ***** 2025-10-08 15:49:51.221942 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.221947 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-08 15:49:51.221952 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.221956 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-08 15:49:51.221961 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.221966 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-08 15:49:51.221971 | orchestrator | 2025-10-08 15:49:51.221976 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-10-08 15:49:51.221981 | orchestrator | Wednesday 08 October 2025 15:48:58 +0000 (0:00:04.851) 0:09:55.970 ***** 2025-10-08 15:49:51.221985 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.221990 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:49:51.221995 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-10-08 15:49:51.222000 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:49:51.222004 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:49:51.222009 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:49:51.222038 | orchestrator | 2025-10-08 15:49:51.222043 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-08 15:49:51.222048 | orchestrator | Wednesday 08 October 2025 15:49:01 +0000 (0:00:02.318) 0:09:58.289 ***** 2025-10-08 15:49:51.222053 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-08 15:49:51.222058 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.222063 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-08 15:49:51.222067 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.222072 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-08 15:49:51.222077 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.222082 | orchestrator | 2025-10-08 15:49:51.222087 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-10-08 15:49:51.222095 | orchestrator | Wednesday 08 October 2025 15:49:02 +0000 (0:00:01.264) 0:09:59.554 ***** 2025-10-08 15:49:51.222099 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-10-08 15:49:51.222104 | orchestrator | 2025-10-08 15:49:51.222109 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-10-08 15:49:51.222114 | orchestrator | Wednesday 08 October 2025 15:49:02 +0000 (0:00:00.254) 0:09:59.808 ***** 2025-10-08 15:49:51.222119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222126 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222149 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222179 | orchestrator | 2025-10-08 15:49:51.222185 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-10-08 15:49:51.222189 | orchestrator | Wednesday 08 October 2025 15:49:03 +0000 (0:00:00.952) 0:10:00.760 ***** 2025-10-08 15:49:51.222194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-08 15:49:51.222219 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222223 | orchestrator | 2025-10-08 15:49:51.222228 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-10-08 15:49:51.222233 | orchestrator | Wednesday 08 October 2025 15:49:04 +0000 (0:00:00.912) 0:10:01.673 ***** 2025-10-08 15:49:51.222237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-08 15:49:51.222242 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-08 15:49:51.222247 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-08 15:49:51.222252 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-08 15:49:51.222256 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-08 15:49:51.222261 | orchestrator | 2025-10-08 15:49:51.222265 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-10-08 15:49:51.222270 | orchestrator | Wednesday 08 October 2025 15:49:36 +0000 (0:00:31.668) 0:10:33.341 ***** 2025-10-08 15:49:51.222274 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222283 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.222287 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.222292 | orchestrator | 2025-10-08 15:49:51.222296 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-10-08 15:49:51.222301 | orchestrator | Wednesday 08 October 2025 15:49:36 +0000 (0:00:00.595) 
0:10:33.937 ***** 2025-10-08 15:49:51.222305 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222310 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.222314 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.222319 | orchestrator | 2025-10-08 15:49:51.222323 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-10-08 15:49:51.222328 | orchestrator | Wednesday 08 October 2025 15:49:37 +0000 (0:00:00.344) 0:10:34.281 ***** 2025-10-08 15:49:51.222332 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.222337 | orchestrator | 2025-10-08 15:49:51.222342 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-10-08 15:49:51.222346 | orchestrator | Wednesday 08 October 2025 15:49:37 +0000 (0:00:00.556) 0:10:34.838 ***** 2025-10-08 15:49:51.222351 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.222355 | orchestrator | 2025-10-08 15:49:51.222360 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-10-08 15:49:51.222364 | orchestrator | Wednesday 08 October 2025 15:49:38 +0000 (0:00:00.785) 0:10:35.624 ***** 2025-10-08 15:49:51.222369 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.222373 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.222378 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.222382 | orchestrator | 2025-10-08 15:49:51.222387 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-10-08 15:49:51.222391 | orchestrator | Wednesday 08 October 2025 15:49:40 +0000 (0:00:01.401) 0:10:37.026 ***** 2025-10-08 15:49:51.222396 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.222400 | orchestrator 
| changed: [testbed-node-4] 2025-10-08 15:49:51.222405 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.222409 | orchestrator | 2025-10-08 15:49:51.222416 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-10-08 15:49:51.222421 | orchestrator | Wednesday 08 October 2025 15:49:41 +0000 (0:00:01.176) 0:10:38.202 ***** 2025-10-08 15:49:51.222425 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:49:51.222430 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:49:51.222438 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:49:51.222442 | orchestrator | 2025-10-08 15:49:51.222447 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-10-08 15:49:51.222451 | orchestrator | Wednesday 08 October 2025 15:49:43 +0000 (0:00:02.101) 0:10:40.303 ***** 2025-10-08 15:49:51.222456 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.222461 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.222465 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-08 15:49:51.222470 | orchestrator | 2025-10-08 15:49:51.222474 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-08 15:49:51.222479 | orchestrator | Wednesday 08 October 2025 15:49:45 +0000 (0:00:02.340) 0:10:42.644 ***** 2025-10-08 15:49:51.222483 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222488 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.222492 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.222497 | orchestrator | 2025-10-08 15:49:51.222501 | orchestrator | RUNNING 
HANDLER [ceph-handler : Rgws handler] ********************************** 2025-10-08 15:49:51.222509 | orchestrator | Wednesday 08 October 2025 15:49:46 +0000 (0:00:00.643) 0:10:43.287 ***** 2025-10-08 15:49:51.222513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:49:51.222518 | orchestrator | 2025-10-08 15:49:51.222522 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-10-08 15:49:51.222527 | orchestrator | Wednesday 08 October 2025 15:49:46 +0000 (0:00:00.519) 0:10:43.807 ***** 2025-10-08 15:49:51.222531 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.222536 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.222541 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.222545 | orchestrator | 2025-10-08 15:49:51.222549 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-10-08 15:49:51.222554 | orchestrator | Wednesday 08 October 2025 15:49:47 +0000 (0:00:00.351) 0:10:44.159 ***** 2025-10-08 15:49:51.222558 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:49:51.222563 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:49:51.222567 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:49:51.222572 | orchestrator | 2025-10-08 15:49:51.222577 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-10-08 15:49:51.222581 | orchestrator | Wednesday 08 October 2025 15:49:47 +0000 (0:00:00.580) 0:10:44.739 ***** 2025-10-08 15:49:51.222585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-08 15:49:51.222590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-08 15:49:51.222595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-08 15:49:51.222599 | orchestrator | skipping: [testbed-node-3] 2025-10-08 
15:49:51.222604 | orchestrator | 2025-10-08 15:49:51.222608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-10-08 15:49:51.222613 | orchestrator | Wednesday 08 October 2025 15:49:48 +0000 (0:00:00.651) 0:10:45.391 ***** 2025-10-08 15:49:51.222617 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:49:51.222622 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:49:51.222626 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:49:51.222631 | orchestrator | 2025-10-08 15:49:51.222635 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:49:51.222640 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-10-08 15:49:51.222644 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-10-08 15:49:51.222649 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-10-08 15:49:51.222654 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-10-08 15:49:51.222658 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-10-08 15:49:51.222663 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-10-08 15:49:51.222667 | orchestrator | 2025-10-08 15:49:51.222672 | orchestrator | 2025-10-08 15:49:51.222676 | orchestrator | 2025-10-08 15:49:51.222681 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:49:51.222685 | orchestrator | Wednesday 08 October 2025 15:49:48 +0000 (0:00:00.254) 0:10:45.646 ***** 2025-10-08 15:49:51.222690 | orchestrator | =============================================================================== 2025-10-08 15:49:51.222694 | 
orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.48s 2025-10-08 15:49:51.222699 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.04s 2025-10-08 15:49:51.222709 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.67s 2025-10-08 15:49:51.222714 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.36s 2025-10-08 15:49:51.222719 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.33s 2025-10-08 15:49:51.222725 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.64s 2025-10-08 15:49:51.222730 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.44s 2025-10-08 15:49:51.222735 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.17s 2025-10-08 15:49:51.222739 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.91s 2025-10-08 15:49:51.222744 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.72s 2025-10-08 15:49:51.222748 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.59s 2025-10-08 15:49:51.222753 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.07s 2025-10-08 15:49:51.222757 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.85s 2025-10-08 15:49:51.222761 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.01s 2025-10-08 15:49:51.222766 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.90s 2025-10-08 15:49:51.222770 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.61s 2025-10-08 15:49:51.222775 | orchestrator | 
ceph-osd : Unset noup flag ---------------------------------------------- 3.59s 2025-10-08 15:49:51.222779 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.57s 2025-10-08 15:49:51.222784 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.49s 2025-10-08 15:49:51.222788 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.49s 2025-10-08 15:49:51.222793 | orchestrator | 2025-10-08 15:49:51 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:49:51.222797 | orchestrator | 2025-10-08 15:49:51 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:49:51.222802 | orchestrator | 2025-10-08 15:49:51 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:49:54.261796 | orchestrator | 2025-10-08 15:49:54 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:49:54.263023 | orchestrator | 2025-10-08 15:49:54 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:49:54.264974 | orchestrator | 2025-10-08 15:49:54 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:49:54.265223 | orchestrator | 2025-10-08 15:49:54 | INFO  | Wait 1 second(s) until the next check
INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:33.963057 | orchestrator | 2025-10-08 15:50:33 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:37.013835 | orchestrator | 2025-10-08 15:50:37 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:37.014626 | orchestrator | 2025-10-08 15:50:37 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:37.016137 | orchestrator | 2025-10-08 15:50:37 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:37.016188 | orchestrator | 2025-10-08 15:50:37 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:40.070204 | orchestrator | 2025-10-08 15:50:40 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:40.071660 | orchestrator | 2025-10-08 15:50:40 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:40.073263 | orchestrator | 2025-10-08 15:50:40 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:40.073297 | orchestrator | 2025-10-08 15:50:40 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:43.123082 | orchestrator | 2025-10-08 15:50:43 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:43.123831 | orchestrator | 2025-10-08 15:50:43 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:43.125602 | orchestrator | 2025-10-08 15:50:43 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:43.125628 | orchestrator | 2025-10-08 15:50:43 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:46.171523 | orchestrator | 2025-10-08 15:50:46 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:46.174112 | orchestrator | 2025-10-08 15:50:46 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in 
state STARTED 2025-10-08 15:50:46.175729 | orchestrator | 2025-10-08 15:50:46 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:46.175753 | orchestrator | 2025-10-08 15:50:46 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:49.224140 | orchestrator | 2025-10-08 15:50:49 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:49.225420 | orchestrator | 2025-10-08 15:50:49 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:49.227994 | orchestrator | 2025-10-08 15:50:49 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:49.228576 | orchestrator | 2025-10-08 15:50:49 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:52.275039 | orchestrator | 2025-10-08 15:50:52 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:52.275766 | orchestrator | 2025-10-08 15:50:52 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:52.277706 | orchestrator | 2025-10-08 15:50:52 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:52.277736 | orchestrator | 2025-10-08 15:50:52 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:55.324373 | orchestrator | 2025-10-08 15:50:55 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:55.324515 | orchestrator | 2025-10-08 15:50:55 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:55.325304 | orchestrator | 2025-10-08 15:50:55 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:55.325330 | orchestrator | 2025-10-08 15:50:55 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:50:58.378948 | orchestrator | 2025-10-08 15:50:58 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state STARTED 2025-10-08 15:50:58.380734 | orchestrator 
| 2025-10-08 15:50:58 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:50:58.382423 | orchestrator | 2025-10-08 15:50:58 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:50:58.382450 | orchestrator | 2025-10-08 15:50:58 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:01.439139 | orchestrator | 2025-10-08 15:51:01 | INFO  | Task c525e092-38d1-48a7-9760-fda7f6b63900 is in state SUCCESS 2025-10-08 15:51:01.440942 | orchestrator | 2025-10-08 15:51:01.440988 | orchestrator | 2025-10-08 15:51:01.441001 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:51:01.441013 | orchestrator | 2025-10-08 15:51:01.441023 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:51:01.441034 | orchestrator | Wednesday 08 October 2025 15:48:03 +0000 (0:00:00.254) 0:00:00.254 ***** 2025-10-08 15:51:01.441045 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:01.441057 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:01.441067 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:01.441077 | orchestrator | 2025-10-08 15:51:01.441093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:51:01.441104 | orchestrator | Wednesday 08 October 2025 15:48:04 +0000 (0:00:00.293) 0:00:00.548 ***** 2025-10-08 15:51:01.441115 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-10-08 15:51:01.441134 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-10-08 15:51:01.441144 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-10-08 15:51:01.441154 | orchestrator | 2025-10-08 15:51:01.441186 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-10-08 15:51:01.441248 | orchestrator | 2025-10-08 15:51:01.441258 | 
orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-10-08 15:51:01.441268 | orchestrator | Wednesday 08 October 2025 15:48:04 +0000 (0:00:00.420) 0:00:00.968 *****
2025-10-08 15:51:01.441278 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:51:01.441288 | orchestrator |
2025-10-08 15:51:01.441297 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-10-08 15:51:01.441307 | orchestrator | Wednesday 08 October 2025 15:48:05 +0000 (0:00:00.512) 0:00:01.481 *****
2025-10-08 15:51:01.441317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-08 15:51:01.441326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-08 15:51:01.441336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-08 15:51:01.441346 | orchestrator |
2025-10-08 15:51:01.441355 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-10-08 15:51:01.441365 | orchestrator | Wednesday 08 October 2025 15:48:06 +0000 (0:00:01.695) 0:00:03.176 *****
2025-10-08 15:51:01.441378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441480 | orchestrator |
2025-10-08 15:51:01.441490 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-10-08 15:51:01.441500 | orchestrator | Wednesday 08 October 2025 15:48:08 +0000 (0:00:01.760) 0:00:04.937 *****
2025-10-08 15:51:01.441510 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:51:01.441520 | orchestrator |
2025-10-08 15:51:01.441530 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-10-08 15:51:01.441539 | orchestrator | Wednesday 08 October 2025 15:48:09 +0000 (0:00:00.533) 0:00:05.470 *****
2025-10-08 15:51:01.441558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441683 | orchestrator |
2025-10-08 15:51:01.441694 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-10-08 15:51:01.441705 | orchestrator | Wednesday 08 October 2025 15:48:11 +0000 (0:00:02.870) 0:00:08.341 *****
2025-10-08 15:51:01.441716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441740 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:51:01.441753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441795 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:01.441806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441830 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:01.441841 | orchestrator |
2025-10-08 15:51:01.441852 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2025-10-08 15:51:01.441863 | orchestrator | Wednesday 08 October 2025 15:48:13 +0000 (0:00:01.023) 0:00:09.365 *****
2025-10-08 15:51:01.441874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441914 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:51:01.441926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441947 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:01.441957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.441985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-08 15:51:01.441996 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:01.442006 | orchestrator |
2025-10-08 15:51:01.442085 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2025-10-08 15:51:01.442099 | orchestrator | Wednesday 08 October 2025 15:48:13 +0000 (0:00:00.853) 0:00:10.218 *****
2025-10-08 15:51:01.442110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-08 15:51:01.442120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group':
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:51:01.442131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:51:01.442162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442217 | orchestrator | 2025-10-08 15:51:01.442227 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-10-08 15:51:01.442237 | orchestrator | Wednesday 08 October 2025 15:48:16 +0000 (0:00:02.586) 0:00:12.805 ***** 2025-10-08 15:51:01.442247 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.442257 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:01.442267 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:01.442284 | orchestrator | 2025-10-08 15:51:01.442294 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-10-08 15:51:01.442303 | orchestrator | Wednesday 08 October 2025 15:48:19 +0000 (0:00:02.886) 0:00:15.692 ***** 2025-10-08 15:51:01.442313 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:01.442323 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.442435 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:01.442445 | orchestrator | 2025-10-08 15:51:01.442455 | orchestrator | 
TASK [opensearch : Check opensearch containers] ******************************** 2025-10-08 15:51:01.442465 | orchestrator | Wednesday 08 October 2025 15:48:21 +0000 (0:00:02.275) 0:00:17.967 ***** 2025-10-08 15:51:01.442475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:51:01.442499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:51:01.442510 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-08 15:51:01.442521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-08 15:51:01.442573 | orchestrator | 2025-10-08 15:51:01.442583 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-08 15:51:01.442593 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:01.997) 0:00:19.964 ***** 2025-10-08 15:51:01.442603 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:01.442612 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:01.442622 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:01.442632 | orchestrator | 2025-10-08 15:51:01.442642 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-08 15:51:01.442651 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:00.297) 0:00:20.262 ***** 2025-10-08 15:51:01.442661 | orchestrator | 2025-10-08 15:51:01.442670 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-08 15:51:01.442680 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:00.083) 0:00:20.345 ***** 2025-10-08 15:51:01.442690 | orchestrator | 2025-10-08 15:51:01.442700 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-08 15:51:01.442709 | orchestrator | Wednesday 08 October 2025 15:48:24 +0000 (0:00:00.066) 0:00:20.412 ***** 2025-10-08 15:51:01.442719 | orchestrator | 2025-10-08 15:51:01.442729 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-10-08 15:51:01.442738 | orchestrator | Wednesday 08 October 2025 15:48:24 +0000 (0:00:00.070) 0:00:20.483 ***** 2025-10-08 15:51:01.442748 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:01.442758 | orchestrator | 2025-10-08 15:51:01.442767 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-10-08 15:51:01.442783 | orchestrator | 
Wednesday 08 October 2025 15:48:24 +0000 (0:00:00.211) 0:00:20.694 ***** 2025-10-08 15:51:01.442792 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:01.442802 | orchestrator | 2025-10-08 15:51:01.442812 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-10-08 15:51:01.442821 | orchestrator | Wednesday 08 October 2025 15:48:25 +0000 (0:00:00.775) 0:00:21.470 ***** 2025-10-08 15:51:01.442831 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.442841 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:01.442850 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:01.442860 | orchestrator | 2025-10-08 15:51:01.442870 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-10-08 15:51:01.442880 | orchestrator | Wednesday 08 October 2025 15:49:27 +0000 (0:01:02.836) 0:01:24.306 ***** 2025-10-08 15:51:01.442889 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.442899 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:01.442909 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:01.442918 | orchestrator | 2025-10-08 15:51:01.442928 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-08 15:51:01.442938 | orchestrator | Wednesday 08 October 2025 15:50:47 +0000 (0:01:19.726) 0:02:44.033 ***** 2025-10-08 15:51:01.442947 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:51:01.442957 | orchestrator | 2025-10-08 15:51:01.442967 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-10-08 15:51:01.442977 | orchestrator | Wednesday 08 October 2025 15:50:48 +0000 (0:00:00.757) 0:02:44.791 ***** 2025-10-08 15:51:01.442986 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:01.442996 | orchestrator | 2025-10-08 15:51:01.443006 | 
orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-10-08 15:51:01.443016 | orchestrator | Wednesday 08 October 2025 15:50:51 +0000 (0:00:02.576) 0:02:47.367 ***** 2025-10-08 15:51:01.443025 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:01.443035 | orchestrator | 2025-10-08 15:51:01.443045 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-10-08 15:51:01.443056 | orchestrator | Wednesday 08 October 2025 15:50:53 +0000 (0:00:02.427) 0:02:49.795 ***** 2025-10-08 15:51:01.443068 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.443079 | orchestrator | 2025-10-08 15:51:01.443090 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-10-08 15:51:01.443101 | orchestrator | Wednesday 08 October 2025 15:50:56 +0000 (0:00:02.828) 0:02:52.623 ***** 2025-10-08 15:51:01.443111 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:01.443122 | orchestrator | 2025-10-08 15:51:01.443133 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:51:01.443145 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-08 15:51:01.443158 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:51:01.443228 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-08 15:51:01.443240 | orchestrator | 2025-10-08 15:51:01.443251 | orchestrator | 2025-10-08 15:51:01.443262 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:51:01.443278 | orchestrator | Wednesday 08 October 2025 15:50:58 +0000 (0:00:02.506) 0:02:55.129 ***** 2025-10-08 15:51:01.443290 | orchestrator | 
=============================================================================== 2025-10-08 15:51:01.443301 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.73s 2025-10-08 15:51:01.443312 | orchestrator | opensearch : Restart opensearch container ------------------------------ 62.84s 2025-10-08 15:51:01.443323 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.89s 2025-10-08 15:51:01.443344 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.87s 2025-10-08 15:51:01.443356 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.83s 2025-10-08 15:51:01.443371 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.59s 2025-10-08 15:51:01.443383 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.58s 2025-10-08 15:51:01.443394 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.51s 2025-10-08 15:51:01.443405 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.43s 2025-10-08 15:51:01.443414 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.28s 2025-10-08 15:51:01.443424 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.00s 2025-10-08 15:51:01.443433 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.76s 2025-10-08 15:51:01.443443 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.70s 2025-10-08 15:51:01.443453 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.02s 2025-10-08 15:51:01.443463 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.85s 2025-10-08 15:51:01.443472 | orchestrator | 
opensearch : Perform a flush -------------------------------------------- 0.78s 2025-10-08 15:51:01.443482 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.76s 2025-10-08 15:51:01.443492 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-10-08 15:51:01.443501 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-10-08 15:51:01.443511 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-10-08 15:51:01.443520 | orchestrator | 2025-10-08 15:51:01 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:01.443531 | orchestrator | 2025-10-08 15:51:01 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:51:01.443541 | orchestrator | 2025-10-08 15:51:01 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:04.494604 | orchestrator | 2025-10-08 15:51:04 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:04.497745 | orchestrator | 2025-10-08 15:51:04 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:51:04.497786 | orchestrator | 2025-10-08 15:51:04 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:07.543338 | orchestrator | 2025-10-08 15:51:07 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:07.544869 | orchestrator | 2025-10-08 15:51:07 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 2025-10-08 15:51:07.544896 | orchestrator | 2025-10-08 15:51:07 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:10.583271 | orchestrator | 2025-10-08 15:51:10 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:10.584508 | orchestrator | 2025-10-08 15:51:10 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state STARTED 
2025-10-08 15:51:10.584536 | orchestrator | 2025-10-08 15:51:10 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:13.617266 | orchestrator | 2025-10-08 15:51:13 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:13.618195 | orchestrator | 2025-10-08 15:51:13 | INFO  | Task 1aac30e0-8d96-46d0-8b24-1d528accbfcc is in state SUCCESS 2025-10-08 15:51:13.619514 | orchestrator | 2025-10-08 15:51:13 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:13.621014 | orchestrator | 2025-10-08 15:51:13.621044 | orchestrator | 2025-10-08 15:51:13.621082 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-10-08 15:51:13.621093 | orchestrator | 2025-10-08 15:51:13.621103 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-10-08 15:51:13.621114 | orchestrator | Wednesday 08 October 2025 15:48:03 +0000 (0:00:00.089) 0:00:00.089 ***** 2025-10-08 15:51:13.621123 | orchestrator | ok: [localhost] => { 2025-10-08 15:51:13.621136 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-10-08 15:51:13.621146 | orchestrator | } 2025-10-08 15:51:13.621156 | orchestrator | 2025-10-08 15:51:13.621194 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-10-08 15:51:13.621205 | orchestrator | Wednesday 08 October 2025 15:48:03 +0000 (0:00:00.054) 0:00:00.144 ***** 2025-10-08 15:51:13.621216 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-10-08 15:51:13.621228 | orchestrator | ...ignoring 2025-10-08 15:51:13.621238 | orchestrator | 2025-10-08 15:51:13.621248 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-10-08 15:51:13.621258 | orchestrator | Wednesday 08 October 2025 15:48:06 +0000 (0:00:02.843) 0:00:02.988 ***** 2025-10-08 15:51:13.621268 | orchestrator | skipping: [localhost] 2025-10-08 15:51:13.621279 | orchestrator | 2025-10-08 15:51:13.621289 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-10-08 15:51:13.621298 | orchestrator | Wednesday 08 October 2025 15:48:06 +0000 (0:00:00.056) 0:00:03.045 ***** 2025-10-08 15:51:13.621308 | orchestrator | ok: [localhost] 2025-10-08 15:51:13.621318 | orchestrator | 2025-10-08 15:51:13.621327 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:51:13.621337 | orchestrator | 2025-10-08 15:51:13.621363 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:51:13.621373 | orchestrator | Wednesday 08 October 2025 15:48:06 +0000 (0:00:00.154) 0:00:03.199 ***** 2025-10-08 15:51:13.621383 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.621393 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.621402 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.621412 | orchestrator | 2025-10-08 15:51:13.621421 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:51:13.621431 | orchestrator | Wednesday 08 October 2025 15:48:07 +0000 (0:00:00.322) 0:00:03.521 ***** 2025-10-08 15:51:13.621441 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-10-08 15:51:13.621452 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-10-08 15:51:13.621461 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-10-08 15:51:13.621471 | orchestrator |
2025-10-08 15:51:13.621480 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-10-08 15:51:13.621490 | orchestrator |
2025-10-08 15:51:13.621500 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-10-08 15:51:13.621509 | orchestrator | Wednesday 08 October 2025 15:48:07 +0000 (0:00:00.741) 0:00:04.263 *****
2025-10-08 15:51:13.621519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 15:51:13.621529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 15:51:13.621539 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 15:51:13.621548 | orchestrator |
2025-10-08 15:51:13.621558 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-10-08 15:51:13.621567 | orchestrator | Wednesday 08 October 2025 15:48:08 +0000 (0:00:00.402) 0:00:04.666 *****
2025-10-08 15:51:13.621577 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:51:13.621589 | orchestrator |
2025-10-08 15:51:13.621598 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-10-08 15:51:13.621610 | orchestrator | Wednesday 08 October 2025 15:48:08 +0000 (0:00:00.532) 0:00:05.199 *****
2025-10-08 15:51:13.621658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621715 | orchestrator |
2025-10-08 15:51:13.621732 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-10-08 15:51:13.621744 | orchestrator | Wednesday 08 October 2025 15:48:11 +0000 (0:00:03.054) 0:00:08.254 *****
2025-10-08 15:51:13.621755 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.621767 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:51:13.621778 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.621788 | orchestrator |
2025-10-08 15:51:13.621799 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-10-08 15:51:13.621809 | orchestrator | Wednesday 08 October 2025 15:48:12 +0000 (0:00:00.620) 0:00:08.874 *****
2025-10-08 15:51:13.621820 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.621831 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.621842 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:51:13.621852 | orchestrator |
2025-10-08 15:51:13.621862 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-10-08 15:51:13.621874 | orchestrator | Wednesday 08 October 2025 15:48:13 +0000 (0:00:01.429) 0:00:10.303 *****
2025-10-08 15:51:13.621890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.621951 | orchestrator |
2025-10-08 15:51:13.621963 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-10-08 15:51:13.621980 | orchestrator | Wednesday 08 October 2025 15:48:17 +0000 (0:00:03.775) 0:00:14.079 *****
2025-10-08 15:51:13.621990 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.622000 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.622010 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:51:13.622070 | orchestrator |
2025-10-08 15:51:13.622081 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-10-08 15:51:13.622090 | orchestrator | Wednesday 08 October 2025 15:48:18 +0000 (0:00:01.203) 0:00:15.283 *****
2025-10-08 15:51:13.622100 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:51:13.622110 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:51:13.622120 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:51:13.622129 | orchestrator |
2025-10-08 15:51:13.622139 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-10-08 15:51:13.622149 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:04.311) 0:00:19.594 *****
2025-10-08 15:51:13.622158 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:51:13.622194 | orchestrator |
2025-10-08 15:51:13.622204 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-10-08 15:51:13.622213 | orchestrator | Wednesday 08 October 2025 15:48:23 +0000 (0:00:00.651) 0:00:20.245 *****
2025-10-08 15:51:13.622234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622246 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.622262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622281 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.622299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622310 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:51:13.622320 | orchestrator |
2025-10-08 15:51:13.622330 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-10-08 15:51:13.622339 | orchestrator | Wednesday 08 October 2025 15:48:27 +0000 (0:00:03.726) 0:00:23.972 *****
2025-10-08 15:51:13.622354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622372 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.622388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622399 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:51:13.622415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622439 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.622449 | orchestrator |
2025-10-08 15:51:13.622459 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-10-08 15:51:13.622469 | orchestrator | Wednesday 08 October 2025 15:48:30 +0000 (0:00:02.824) 0:00:26.797 *****
2025-10-08 15:51:13.622479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622490 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:51:13.622513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622532 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:51:13.622542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622553 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:51:13.622563 | orchestrator |
2025-10-08 15:51:13.622572 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-10-08 15:51:13.622582 | orchestrator | Wednesday 08 October 2025 15:48:33 +0000 (0:00:02.859) 0:00:29.657 *****
2025-10-08 15:51:13.622605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-08 15:51:13.622664 | orchestrator |
2025-10-08 15:51:13.622674 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-10-08 15:51:13.622684 | orchestrator | Wednesday 08 October 2025 15:48:36 +0000 (0:00:03.448) 0:00:33.105 *****
2025-10-08 15:51:13.622694 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:51:13.622708 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:51:13.622718 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:51:13.622728 | orchestrator |
2025-10-08 15:51:13.622738 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-10-08 15:51:13.622747 | orchestrator | Wednesday 08 October 2025 15:48:37 +0000 (0:00:00.946) 0:00:34.051 *****
2025-10-08 15:51:13.622757 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:51:13.622767 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:51:13.622777 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:51:13.622786 | orchestrator |
2025-10-08 15:51:13.622796 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-10-08 15:51:13.622806 | orchestrator | Wednesday 08 October 2025 15:48:38 +0000 (0:00:00.574) 0:00:34.626 *****
2025-10-08 15:51:13.622816 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:51:13.622826 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:51:13.622835 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:51:13.622845 | orchestrator |
2025-10-08 15:51:13.622855 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-10-08 15:51:13.622865 | orchestrator | Wednesday 08 October 2025 15:48:38 +0000 (0:00:00.329) 0:00:34.955 *****
2025-10-08 15:51:13.622876 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-10-08 15:51:13.622886 | orchestrator | ...ignoring
2025-10-08 15:51:13.622896 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-10-08 15:51:13.622906 | orchestrator | ...ignoring
2025-10-08 15:51:13.622915 | orchestrator | fatal: [testbed-node-2]: FAILED!
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-10-08 15:51:13.622925 | orchestrator | ...ignoring 2025-10-08 15:51:13.622935 | orchestrator | 2025-10-08 15:51:13.622945 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-10-08 15:51:13.622954 | orchestrator | Wednesday 08 October 2025 15:48:49 +0000 (0:00:10.969) 0:00:45.925 ***** 2025-10-08 15:51:13.622964 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.622974 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.622983 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.622993 | orchestrator | 2025-10-08 15:51:13.623003 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-10-08 15:51:13.623012 | orchestrator | Wednesday 08 October 2025 15:48:49 +0000 (0:00:00.472) 0:00:46.397 ***** 2025-10-08 15:51:13.623022 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623032 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623042 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623051 | orchestrator | 2025-10-08 15:51:13.623061 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-10-08 15:51:13.623070 | orchestrator | Wednesday 08 October 2025 15:48:50 +0000 (0:00:00.737) 0:00:47.135 ***** 2025-10-08 15:51:13.623080 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623090 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623100 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623109 | orchestrator | 2025-10-08 15:51:13.623119 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-10-08 15:51:13.623136 | orchestrator | Wednesday 08 October 2025 15:48:51 +0000 (0:00:00.500) 0:00:47.635 ***** 2025-10-08 15:51:13.623146 | orchestrator | skipping: 
[testbed-node-0] 2025-10-08 15:51:13.623155 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623180 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623190 | orchestrator | 2025-10-08 15:51:13.623200 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-10-08 15:51:13.623209 | orchestrator | Wednesday 08 October 2025 15:48:51 +0000 (0:00:00.424) 0:00:48.059 ***** 2025-10-08 15:51:13.623219 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.623229 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.623238 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.623248 | orchestrator | 2025-10-08 15:51:13.623258 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-10-08 15:51:13.623267 | orchestrator | Wednesday 08 October 2025 15:48:52 +0000 (0:00:00.432) 0:00:48.491 ***** 2025-10-08 15:51:13.623283 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623293 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623303 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623312 | orchestrator | 2025-10-08 15:51:13.623322 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-08 15:51:13.623332 | orchestrator | Wednesday 08 October 2025 15:48:52 +0000 (0:00:00.662) 0:00:49.154 ***** 2025-10-08 15:51:13.623342 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623352 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623361 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-10-08 15:51:13.623371 | orchestrator | 2025-10-08 15:51:13.623381 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-10-08 15:51:13.623391 | orchestrator | Wednesday 08 October 2025 15:48:53 +0000 (0:00:00.415) 0:00:49.570 ***** 2025-10-08 
15:51:13.623401 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.623410 | orchestrator | 2025-10-08 15:51:13.623420 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-10-08 15:51:13.623429 | orchestrator | Wednesday 08 October 2025 15:49:03 +0000 (0:00:10.489) 0:01:00.059 ***** 2025-10-08 15:51:13.623439 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.623449 | orchestrator | 2025-10-08 15:51:13.623458 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-08 15:51:13.623468 | orchestrator | Wednesday 08 October 2025 15:49:03 +0000 (0:00:00.149) 0:01:00.208 ***** 2025-10-08 15:51:13.623478 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623488 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623497 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623507 | orchestrator | 2025-10-08 15:51:13.623517 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-10-08 15:51:13.623526 | orchestrator | Wednesday 08 October 2025 15:49:04 +0000 (0:00:01.082) 0:01:01.291 ***** 2025-10-08 15:51:13.623541 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.623551 | orchestrator | 2025-10-08 15:51:13.623560 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-10-08 15:51:13.623570 | orchestrator | Wednesday 08 October 2025 15:49:13 +0000 (0:00:08.371) 0:01:09.663 ***** 2025-10-08 15:51:13.623579 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.623589 | orchestrator | 2025-10-08 15:51:13.623599 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-10-08 15:51:13.623608 | orchestrator | Wednesday 08 October 2025 15:49:14 +0000 (0:00:01.684) 0:01:11.347 ***** 2025-10-08 15:51:13.623618 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.623628 | 
orchestrator | 2025-10-08 15:51:13.623637 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-10-08 15:51:13.623647 | orchestrator | Wednesday 08 October 2025 15:49:17 +0000 (0:00:02.547) 0:01:13.895 ***** 2025-10-08 15:51:13.623657 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.623674 | orchestrator | 2025-10-08 15:51:13.623684 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-10-08 15:51:13.623694 | orchestrator | Wednesday 08 October 2025 15:49:17 +0000 (0:00:00.151) 0:01:14.047 ***** 2025-10-08 15:51:13.623703 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623713 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.623723 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.623732 | orchestrator | 2025-10-08 15:51:13.623742 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-10-08 15:51:13.623752 | orchestrator | Wednesday 08 October 2025 15:49:17 +0000 (0:00:00.344) 0:01:14.392 ***** 2025-10-08 15:51:13.623761 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.623771 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-10-08 15:51:13.623781 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:13.623791 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:13.623801 | orchestrator | 2025-10-08 15:51:13.623810 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-10-08 15:51:13.623820 | orchestrator | skipping: no hosts matched 2025-10-08 15:51:13.623830 | orchestrator | 2025-10-08 15:51:13.623839 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-08 15:51:13.623849 | orchestrator | 2025-10-08 15:51:13.623859 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-10-08 15:51:13.623868 | orchestrator | Wednesday 08 October 2025 15:49:18 +0000 (0:00:00.565) 0:01:14.957 ***** 2025-10-08 15:51:13.623878 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:51:13.623888 | orchestrator | 2025-10-08 15:51:13.623898 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-08 15:51:13.623908 | orchestrator | Wednesday 08 October 2025 15:49:35 +0000 (0:00:16.784) 0:01:31.742 ***** 2025-10-08 15:51:13.623917 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.623927 | orchestrator | 2025-10-08 15:51:13.623937 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-08 15:51:13.623946 | orchestrator | Wednesday 08 October 2025 15:49:56 +0000 (0:00:20.732) 0:01:52.474 ***** 2025-10-08 15:51:13.623956 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.623965 | orchestrator | 2025-10-08 15:51:13.623975 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-08 15:51:13.623985 | orchestrator | 2025-10-08 15:51:13.623994 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-08 15:51:13.624004 | orchestrator | Wednesday 08 October 2025 15:49:58 +0000 (0:00:02.456) 0:01:54.930 ***** 2025-10-08 15:51:13.624014 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:51:13.624023 | orchestrator | 2025-10-08 15:51:13.624033 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-08 15:51:13.624043 | orchestrator | Wednesday 08 October 2025 15:50:16 +0000 (0:00:18.090) 0:02:13.021 ***** 2025-10-08 15:51:13.624052 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.624062 | orchestrator | 2025-10-08 15:51:13.624072 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-08 15:51:13.624081 
| orchestrator | Wednesday 08 October 2025 15:50:37 +0000 (0:00:20.588) 0:02:33.610 ***** 2025-10-08 15:51:13.624091 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.624101 | orchestrator | 2025-10-08 15:51:13.624110 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-10-08 15:51:13.624120 | orchestrator | 2025-10-08 15:51:13.624134 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-08 15:51:13.624144 | orchestrator | Wednesday 08 October 2025 15:50:39 +0000 (0:00:02.531) 0:02:36.141 ***** 2025-10-08 15:51:13.624154 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.624206 | orchestrator | 2025-10-08 15:51:13.624218 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-08 15:51:13.624229 | orchestrator | Wednesday 08 October 2025 15:50:57 +0000 (0:00:17.514) 0:02:53.655 ***** 2025-10-08 15:51:13.624246 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.624256 | orchestrator | 2025-10-08 15:51:13.624266 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-08 15:51:13.624276 | orchestrator | Wednesday 08 October 2025 15:50:57 +0000 (0:00:00.591) 0:02:54.247 ***** 2025-10-08 15:51:13.624286 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.624295 | orchestrator | 2025-10-08 15:51:13.624305 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-10-08 15:51:13.624315 | orchestrator | 2025-10-08 15:51:13.624325 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-10-08 15:51:13.624334 | orchestrator | Wednesday 08 October 2025 15:51:00 +0000 (0:00:02.794) 0:02:57.042 ***** 2025-10-08 15:51:13.624344 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:51:13.624354 | orchestrator | 
2025-10-08 15:51:13.624363 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-10-08 15:51:13.624373 | orchestrator | Wednesday 08 October 2025 15:51:01 +0000 (0:00:00.464) 0:02:57.506 ***** 2025-10-08 15:51:13.624383 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.624392 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.624402 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.624412 | orchestrator | 2025-10-08 15:51:13.624421 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-10-08 15:51:13.624436 | orchestrator | Wednesday 08 October 2025 15:51:03 +0000 (0:00:02.253) 0:02:59.760 ***** 2025-10-08 15:51:13.624446 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.624456 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.624466 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.624476 | orchestrator | 2025-10-08 15:51:13.624485 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-10-08 15:51:13.624495 | orchestrator | Wednesday 08 October 2025 15:51:05 +0000 (0:00:02.241) 0:03:02.002 ***** 2025-10-08 15:51:13.624505 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.624514 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.624524 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.624534 | orchestrator | 2025-10-08 15:51:13.624544 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-10-08 15:51:13.624554 | orchestrator | Wednesday 08 October 2025 15:51:07 +0000 (0:00:02.253) 0:03:04.256 ***** 2025-10-08 15:51:13.624563 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.624573 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.624583 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:51:13.624593 | orchestrator | 
2025-10-08 15:51:13.624603 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-10-08 15:51:13.624613 | orchestrator | Wednesday 08 October 2025 15:51:09 +0000 (0:00:02.151) 0:03:06.407 ***** 2025-10-08 15:51:13.624622 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:51:13.624632 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:51:13.624642 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:51:13.624651 | orchestrator | 2025-10-08 15:51:13.624661 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-10-08 15:51:13.624671 | orchestrator | Wednesday 08 October 2025 15:51:12 +0000 (0:00:02.839) 0:03:09.246 ***** 2025-10-08 15:51:13.624681 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:51:13.624690 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:51:13.624700 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:51:13.624710 | orchestrator | 2025-10-08 15:51:13.624720 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:51:13.624730 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-10-08 15:51:13.624740 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-10-08 15:51:13.624760 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-10-08 15:51:13.624770 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-10-08 15:51:13.624780 | orchestrator | 2025-10-08 15:51:13.624789 | orchestrator | 2025-10-08 15:51:13.624799 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:51:13.624809 | orchestrator | Wednesday 08 October 2025 15:51:13 +0000 (0:00:00.213) 0:03:09.460 ***** 2025-10-08 15:51:13.624819 | 
orchestrator | =============================================================================== 2025-10-08 15:51:13.624829 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.32s 2025-10-08 15:51:13.624838 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.88s 2025-10-08 15:51:13.624848 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.51s 2025-10-08 15:51:13.624858 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s 2025-10-08 15:51:13.624868 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.49s 2025-10-08 15:51:13.624877 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.37s 2025-10-08 15:51:13.624893 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.99s 2025-10-08 15:51:13.624903 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.31s 2025-10-08 15:51:13.624913 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.78s 2025-10-08 15:51:13.624922 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.73s 2025-10-08 15:51:13.624932 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.45s 2025-10-08 15:51:13.624941 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.05s 2025-10-08 15:51:13.624951 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.86s 2025-10-08 15:51:13.624961 | orchestrator | Check MariaDB service --------------------------------------------------- 2.84s 2025-10-08 15:51:13.624970 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.84s 2025-10-08 15:51:13.624980 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.82s 2025-10-08 15:51:13.624989 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.79s 2025-10-08 15:51:13.624999 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.55s 2025-10-08 15:51:13.625009 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.25s 2025-10-08 15:51:13.625019 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.25s 2025-10-08 15:51:16.665051 | orchestrator | 2025-10-08 15:51:16 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:16.668042 | orchestrator | 2025-10-08 15:51:16 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:16.670534 | orchestrator | 2025-10-08 15:51:16 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:16.670918 | orchestrator | 2025-10-08 15:51:16 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:19.715277 | orchestrator | 2025-10-08 15:51:19 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:19.716503 | orchestrator | 2025-10-08 15:51:19 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:19.718015 | orchestrator | 2025-10-08 15:51:19 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:19.718310 | orchestrator | 2025-10-08 15:51:19 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:22.767514 | orchestrator | 2025-10-08 15:51:22 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:22.770933 | orchestrator | 2025-10-08 15:51:22 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:22.772559 | orchestrator | 2025-10-08 15:51:22 | INFO  | Task 
3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:22.772860 | orchestrator | 2025-10-08 15:51:22 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:25.817479 | orchestrator | 2025-10-08 15:51:25 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:25.819321 | orchestrator | 2025-10-08 15:51:25 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:25.821460 | orchestrator | 2025-10-08 15:51:25 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:25.821564 | orchestrator | 2025-10-08 15:51:25 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:28.867291 | orchestrator | 2025-10-08 15:51:28 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:28.868319 | orchestrator | 2025-10-08 15:51:28 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:28.869533 | orchestrator | 2025-10-08 15:51:28 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:28.870115 | orchestrator | 2025-10-08 15:51:28 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:31.915110 | orchestrator | 2025-10-08 15:51:31 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:31.915741 | orchestrator | 2025-10-08 15:51:31 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:31.917049 | orchestrator | 2025-10-08 15:51:31 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:31.917129 | orchestrator | 2025-10-08 15:51:31 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:34.970080 | orchestrator | 2025-10-08 15:51:34 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:34.971531 | orchestrator | 2025-10-08 15:51:34 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state 
STARTED 2025-10-08 15:51:34.972275 | orchestrator | 2025-10-08 15:51:34 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:34.972355 | orchestrator | 2025-10-08 15:51:34 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:38.004783 | orchestrator | 2025-10-08 15:51:38 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:38.004884 | orchestrator | 2025-10-08 15:51:38 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:38.005745 | orchestrator | 2025-10-08 15:51:38 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:38.005774 | orchestrator | 2025-10-08 15:51:38 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:41.038649 | orchestrator | 2025-10-08 15:51:41 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:41.040451 | orchestrator | 2025-10-08 15:51:41 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:41.041206 | orchestrator | 2025-10-08 15:51:41 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:41.041233 | orchestrator | 2025-10-08 15:51:41 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:44.079362 | orchestrator | 2025-10-08 15:51:44 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:44.081812 | orchestrator | 2025-10-08 15:51:44 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:44.083808 | orchestrator | 2025-10-08 15:51:44 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:44.083859 | orchestrator | 2025-10-08 15:51:44 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:47.122683 | orchestrator | 2025-10-08 15:51:47 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:47.122783 | orchestrator | 
2025-10-08 15:51:47 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:47.123349 | orchestrator | 2025-10-08 15:51:47 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:47.123377 | orchestrator | 2025-10-08 15:51:47 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:50.156552 | orchestrator | 2025-10-08 15:51:50 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:50.156784 | orchestrator | 2025-10-08 15:51:50 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:50.157288 | orchestrator | 2025-10-08 15:51:50 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:50.157311 | orchestrator | 2025-10-08 15:51:50 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:53.196116 | orchestrator | 2025-10-08 15:51:53 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:53.197413 | orchestrator | 2025-10-08 15:51:53 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:53.199086 | orchestrator | 2025-10-08 15:51:53 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:53.199135 | orchestrator | 2025-10-08 15:51:53 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:56.237656 | orchestrator | 2025-10-08 15:51:56 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:56.240715 | orchestrator | 2025-10-08 15:51:56 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:56.243107 | orchestrator | 2025-10-08 15:51:56 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:56.243139 | orchestrator | 2025-10-08 15:51:56 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:51:59.285848 | orchestrator | 2025-10-08 15:51:59 | INFO  | Task 
5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:51:59.286990 | orchestrator | 2025-10-08 15:51:59 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:51:59.287965 | orchestrator | 2025-10-08 15:51:59 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:51:59.287991 | orchestrator | 2025-10-08 15:51:59 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:52:02.328260 | orchestrator | 2025-10-08 15:52:02 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:52:02.330405 | orchestrator | 2025-10-08 15:52:02 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state STARTED 2025-10-08 15:52:02.331885 | orchestrator | 2025-10-08 15:52:02 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED 2025-10-08 15:52:02.331917 | orchestrator | 2025-10-08 15:52:02 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:52:05.384661 | orchestrator | 2025-10-08 15:52:05 | INFO  | Task bf1e5c07-dc60-4243-ac52-5a11f5c06179 is in state STARTED 2025-10-08 15:52:05.386216 | orchestrator | 2025-10-08 15:52:05 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:52:05.391591 | orchestrator | 2025-10-08 15:52:05 | INFO  | Task 5c850dda-c821-4643-b1aa-fc211c3021bc is in state SUCCESS 2025-10-08 15:52:05.392018 | orchestrator | 2025-10-08 15:52:05.393500 | orchestrator | 2025-10-08 15:52:05.393529 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-10-08 15:52:05.393542 | orchestrator | 2025-10-08 15:52:05.393553 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-10-08 15:52:05.393565 | orchestrator | Wednesday 08 October 2025 15:49:53 +0000 (0:00:00.615) 0:00:00.615 ***** 2025-10-08 15:52:05.393699 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-10-08 15:52:05.393712 | orchestrator | 2025-10-08 15:52:05.393723 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-10-08 15:52:05.393734 | orchestrator | Wednesday 08 October 2025 15:49:54 +0000 (0:00:00.624) 0:00:01.239 ***** 2025-10-08 15:52:05.393745 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.393757 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.393768 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.393779 | orchestrator | 2025-10-08 15:52:05.394564 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-10-08 15:52:05.394609 | orchestrator | Wednesday 08 October 2025 15:49:55 +0000 (0:00:00.681) 0:00:01.920 ***** 2025-10-08 15:52:05.394621 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.394632 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.394643 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.394653 | orchestrator | 2025-10-08 15:52:05.394665 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-10-08 15:52:05.394676 | orchestrator | Wednesday 08 October 2025 15:49:55 +0000 (0:00:00.302) 0:00:02.223 ***** 2025-10-08 15:52:05.394746 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.394762 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.394773 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.394784 | orchestrator | 2025-10-08 15:52:05.394795 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-10-08 15:52:05.394823 | orchestrator | Wednesday 08 October 2025 15:49:56 +0000 (0:00:00.832) 0:00:03.055 ***** 2025-10-08 15:52:05.394834 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.394845 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.394856 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.394867 | orchestrator | 2025-10-08 
15:52:05.394879 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-10-08 15:52:05.394890 | orchestrator | Wednesday 08 October 2025 15:49:56 +0000 (0:00:00.323) 0:00:03.379 ***** 2025-10-08 15:52:05.394902 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.394913 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.394924 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.394935 | orchestrator | 2025-10-08 15:52:05.394946 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-10-08 15:52:05.394957 | orchestrator | Wednesday 08 October 2025 15:49:56 +0000 (0:00:00.296) 0:00:03.676 ***** 2025-10-08 15:52:05.394968 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.394979 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.394990 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.395001 | orchestrator | 2025-10-08 15:52:05.395013 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-10-08 15:52:05.395024 | orchestrator | Wednesday 08 October 2025 15:49:57 +0000 (0:00:00.318) 0:00:03.994 ***** 2025-10-08 15:52:05.395036 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.395048 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.395059 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.395070 | orchestrator | 2025-10-08 15:52:05.395082 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-10-08 15:52:05.395093 | orchestrator | Wednesday 08 October 2025 15:49:57 +0000 (0:00:00.524) 0:00:04.519 ***** 2025-10-08 15:52:05.395120 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.395132 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.395143 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.395154 | orchestrator | 2025-10-08 15:52:05.395166 | orchestrator | TASK [ceph-facts : 
Set_fact monitor_name ansible_facts['hostname']] ************ 2025-10-08 15:52:05.395204 | orchestrator | Wednesday 08 October 2025 15:49:57 +0000 (0:00:00.303) 0:00:04.823 ***** 2025-10-08 15:52:05.395215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-08 15:52:05.395226 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-08 15:52:05.395236 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-08 15:52:05.395247 | orchestrator | 2025-10-08 15:52:05.395258 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-10-08 15:52:05.395269 | orchestrator | Wednesday 08 October 2025 15:49:58 +0000 (0:00:00.674) 0:00:05.497 ***** 2025-10-08 15:52:05.395279 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.395290 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.395301 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.395312 | orchestrator | 2025-10-08 15:52:05.395322 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-10-08 15:52:05.395333 | orchestrator | Wednesday 08 October 2025 15:49:59 +0000 (0:00:00.414) 0:00:05.912 ***** 2025-10-08 15:52:05.395344 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-08 15:52:05.395355 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-08 15:52:05.395365 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-08 15:52:05.395376 | orchestrator | 2025-10-08 15:52:05.395388 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-10-08 15:52:05.395402 | orchestrator | Wednesday 08 October 2025 15:50:01 +0000 (0:00:02.195) 0:00:08.108 ***** 2025-10-08 15:52:05.395414 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-08 15:52:05.395427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-08 15:52:05.395439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-08 15:52:05.395451 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.395463 | orchestrator | 2025-10-08 15:52:05.395475 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-10-08 15:52:05.395532 | orchestrator | Wednesday 08 October 2025 15:50:01 +0000 (0:00:00.641) 0:00:08.749 ***** 2025-10-08 15:52:05.395549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395597 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.395610 | orchestrator | 2025-10-08 15:52:05.395622 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-10-08 15:52:05.395634 | orchestrator | Wednesday 08 October 2025 15:50:02 +0000 (0:00:00.799) 0:00:09.549 ***** 2025-10-08 15:52:05.395648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.395699 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.395711 | orchestrator | 2025-10-08 15:52:05.395723 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-10-08 15:52:05.395735 | orchestrator | Wednesday 08 October 2025 15:50:03 +0000 (0:00:00.348) 0:00:09.897 ***** 2025-10-08 15:52:05.395749 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fae21743a66f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-08 15:49:59.701258', 'end': '2025-10-08 15:49:59.737564', 'delta': '0:00:00.036306', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fae21743a66f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-10-08 15:52:05.395764 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '78d9c8db1380', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-08 15:50:00.482786', 'end': '2025-10-08 15:50:00.521752', 'delta': '0:00:00.038966', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['78d9c8db1380'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-10-08 15:52:05.395808 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a01f6c2a66ce', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-08 15:50:01.056161', 'end': '2025-10-08 15:50:01.091638', 'delta': '0:00:00.035477', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a01f6c2a66ce'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-10-08 15:52:05.395821 | orchestrator | 2025-10-08 15:52:05.395832 | 
orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-10-08 15:52:05.395848 | orchestrator | Wednesday 08 October 2025 15:50:03 +0000 (0:00:00.197) 0:00:10.095 ***** 2025-10-08 15:52:05.395859 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.395878 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:52:05.395889 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:52:05.395900 | orchestrator | 2025-10-08 15:52:05.395911 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-10-08 15:52:05.395921 | orchestrator | Wednesday 08 October 2025 15:50:03 +0000 (0:00:00.459) 0:00:10.554 ***** 2025-10-08 15:52:05.395932 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-10-08 15:52:05.395943 | orchestrator | 2025-10-08 15:52:05.395954 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-10-08 15:52:05.395965 | orchestrator | Wednesday 08 October 2025 15:50:05 +0000 (0:00:01.759) 0:00:12.314 ***** 2025-10-08 15:52:05.395976 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.395987 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.395998 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396009 | orchestrator | 2025-10-08 15:52:05.396020 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-10-08 15:52:05.396031 | orchestrator | Wednesday 08 October 2025 15:50:05 +0000 (0:00:00.308) 0:00:12.623 ***** 2025-10-08 15:52:05.396042 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396053 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396063 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396074 | orchestrator | 2025-10-08 15:52:05.396085 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-08 15:52:05.396096 | 
orchestrator | Wednesday 08 October 2025 15:50:06 +0000 (0:00:00.401) 0:00:13.025 ***** 2025-10-08 15:52:05.396107 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396118 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396129 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396140 | orchestrator | 2025-10-08 15:52:05.396151 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-08 15:52:05.396161 | orchestrator | Wednesday 08 October 2025 15:50:06 +0000 (0:00:00.541) 0:00:13.566 ***** 2025-10-08 15:52:05.396220 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:52:05.396233 | orchestrator | 2025-10-08 15:52:05.396244 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-08 15:52:05.396255 | orchestrator | Wednesday 08 October 2025 15:50:06 +0000 (0:00:00.139) 0:00:13.706 ***** 2025-10-08 15:52:05.396265 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396276 | orchestrator | 2025-10-08 15:52:05.396288 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-08 15:52:05.396299 | orchestrator | Wednesday 08 October 2025 15:50:07 +0000 (0:00:00.225) 0:00:13.931 ***** 2025-10-08 15:52:05.396309 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396320 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396331 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396342 | orchestrator | 2025-10-08 15:52:05.396353 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-10-08 15:52:05.396364 | orchestrator | Wednesday 08 October 2025 15:50:07 +0000 (0:00:00.294) 0:00:14.225 ***** 2025-10-08 15:52:05.396375 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396385 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396396 | orchestrator | skipping: 
[testbed-node-5] 2025-10-08 15:52:05.396407 | orchestrator | 2025-10-08 15:52:05.396418 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-08 15:52:05.396429 | orchestrator | Wednesday 08 October 2025 15:50:07 +0000 (0:00:00.316) 0:00:14.542 ***** 2025-10-08 15:52:05.396439 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396450 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396461 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396472 | orchestrator | 2025-10-08 15:52:05.396483 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-10-08 15:52:05.396494 | orchestrator | Wednesday 08 October 2025 15:50:08 +0000 (0:00:00.499) 0:00:15.042 ***** 2025-10-08 15:52:05.396504 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396524 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396534 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396545 | orchestrator | 2025-10-08 15:52:05.396556 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-08 15:52:05.396567 | orchestrator | Wednesday 08 October 2025 15:50:08 +0000 (0:00:00.362) 0:00:15.404 ***** 2025-10-08 15:52:05.396578 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396589 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396599 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396610 | orchestrator | 2025-10-08 15:52:05.396621 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-08 15:52:05.396632 | orchestrator | Wednesday 08 October 2025 15:50:08 +0000 (0:00:00.320) 0:00:15.724 ***** 2025-10-08 15:52:05.396643 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396654 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396665 | orchestrator | skipping: 
[testbed-node-5] 2025-10-08 15:52:05.396676 | orchestrator | 2025-10-08 15:52:05.396687 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-08 15:52:05.396732 | orchestrator | Wednesday 08 October 2025 15:50:09 +0000 (0:00:00.386) 0:00:16.110 ***** 2025-10-08 15:52:05.396745 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.396756 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.396767 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.396778 | orchestrator | 2025-10-08 15:52:05.396788 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-10-08 15:52:05.396799 | orchestrator | Wednesday 08 October 2025 15:50:09 +0000 (0:00:00.633) 0:00:16.744 ***** 2025-10-08 15:52:05.396816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626', 'dm-uuid-LVM-IwX6ZkXLUCl0YcA4BzLjokZDOeJv2HrfYybBcJHxwkas2gpDO9dJKVm8PTbnaZDM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485', 'dm-uuid-LVM-ed9o0GNO7PQg5svVWsXAoj031P8dkr3TFUwcML7pXDRFpwBAi01fbqUdVpwW93hA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396898 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.396984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397007 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef', 'dm-uuid-LVM-lj2Vpg6qcUbLutvAn92lW9fRMiCop0a96nZpb0XQFL6FwSAuZUWe4yMqwLGh1MzJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QF2svk-J06h-RNzj-e4X5-ESi8-uVgE-VzL6nT', 'scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182', 'scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RccNBU-9RDr-ipRC-Qiuy-lZ6U-9BDk-CYheJR', 'scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff', 'scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516', 'dm-uuid-LVM-3ZOGqWctD4o6vg0odPTHCuhke8CUDp1zHHUOc7hjGx9N4xgfXu78V9LfnkitzdkG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298', 'scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-10-08 15:52:05.397195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B6burn-l0HK-pmKM-ZLX8-pUWb-meyy-cLIfXf', 'scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade', 'scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9WfaV-xFLb-hgB4-M0gh-vWdP-WQMT-J5KorF', 'scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956', 'scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021', 'scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397374 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:52:05.397386 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:52:05.397398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347', 'dm-uuid-LVM-sYScueFnQoEDbsAFWAMa6spsgAc8xeDuz9awT2ffDFq9jBwUbEXZdoMBKRGYttOs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60', 'dm-uuid-LVM-o1NfynOYwuMd33uDEG4GydJoD5Cdujl5dFhpNQiswlXX3LIRayJouByUEan5FOcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-08 15:52:05.397553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZsI5k-hEKk-uygP-Y5a1-Sval-EoXQ-6fgOoA', 'scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd', 'scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1gR1qk-cV0S-VAjr-plUs-5yns-7rtf-ve1FK3', 'scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1', 'scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f', 'scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-08 15:52:05.397631 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:52:05.397642 | orchestrator | 2025-10-08 15:52:05.397654 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-10-08 15:52:05.397666 | orchestrator | Wednesday 08 October 2025 15:50:10 +0000 (0:00:00.667) 0:00:17.411 ***** 2025-10-08 15:52:05.397682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626', 'dm-uuid-LVM-IwX6ZkXLUCl0YcA4BzLjokZDOeJv2HrfYybBcJHxwkas2gpDO9dJKVm8PTbnaZDM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485', 'dm-uuid-LVM-ed9o0GNO7PQg5svVWsXAoj031P8dkr3TFUwcML7pXDRFpwBAi01fbqUdVpwW93hA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5106b6d-a2b1-46e7-8a70-2ffa10fa4fd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-08 15:52:05.397870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626-osd--block--25f30e7b--7b9e--5d46--b3fc--d4cb59f24626'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QF2svk-J06h-RNzj-e4X5-ESi8-uVgE-VzL6nT', 'scsi-0QEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182', 'scsi-SQEMU_QEMU_HARDDISK_e7b164cd-18c4-4443-b153-66ef822cc182'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef', 'dm-uuid-LVM-lj2Vpg6qcUbLutvAn92lW9fRMiCop0a96nZpb0XQFL6FwSAuZUWe4yMqwLGh1MzJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485-osd--block--ff85ad2a--1d5d--50f9--b3a7--2f1eee54f485'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RccNBU-9RDr-ipRC-Qiuy-lZ6U-9BDk-CYheJR', 'scsi-0QEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff', 'scsi-SQEMU_QEMU_HARDDISK_501046cd-0181-4267-8e39-455d7db25dff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516', 'dm-uuid-LVM-3ZOGqWctD4o6vg0odPTHCuhke8CUDp1zHHUOc7hjGx9N4xgfXu78V9LfnkitzdkG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397933 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298', 'scsi-SQEMU_QEMU_HARDDISK_b2b0a7c4-684b-467f-bea3-a2180df0d298'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-08 15:52:05.397980 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.397992 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398093 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.398105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a79e8257-b31c-4b26-9d3c-62ccc1082da3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ac75f6e--526f--52f0--b624--7532d6099aef-osd--block--7ac75f6e--526f--52f0--b624--7532d6099aef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B6burn-l0HK-pmKM-ZLX8-pUWb-meyy-cLIfXf', 'scsi-0QEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade', 'scsi-SQEMU_QEMU_HARDDISK_8931d93f-304b-4b68-94eb-87cca6c6eade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bafbc9f1--844e--58d3--a294--acb7fdea1516-osd--block--bafbc9f1--844e--58d3--a294--acb7fdea1516'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9WfaV-xFLb-hgB4-M0gh-vWdP-WQMT-J5KorF', 'scsi-0QEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956', 'scsi-SQEMU_QEMU_HARDDISK_f279d016-8061-4f1a-b5de-972c25793956'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021', 'scsi-SQEMU_QEMU_HARDDISK_5c2995ee-2a0f-4f5c-ac7c-066cefbff021'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398291 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.398308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347', 'dm-uuid-LVM-sYScueFnQoEDbsAFWAMa6spsgAc8xeDuz9awT2ffDFq9jBwUbEXZdoMBKRGYttOs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60', 'dm-uuid-LVM-o1NfynOYwuMd33uDEG4GydJoD5Cdujl5dFhpNQiswlXX3LIRayJouByUEan5FOcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398376 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398426 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16', 'scsi-SQEMU_QEMU_HARDDISK_1cc5cf69-2914-42bf-9b1b-88c775b3ec52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93919d76--3b82--5996--a675--e75a55626347-osd--block--93919d76--3b82--5996--a675--e75a55626347'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZsI5k-hEKk-uygP-Y5a1-Sval-EoXQ-6fgOoA', 'scsi-0QEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd', 'scsi-SQEMU_QEMU_HARDDISK_96ffedcc-0414-421a-b44e-b183e9db41fd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cead9db5--2c40--515a--bcee--782342d5bd60-osd--block--cead9db5--2c40--515a--bcee--782342d5bd60'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1gR1qk-cV0S-VAjr-plUs-5yns-7rtf-ve1FK3', 'scsi-0QEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1', 'scsi-SQEMU_QEMU_HARDDISK_d1c70491-aba4-4ff7-8b88-cbd07cfcddb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f', 'scsi-SQEMU_QEMU_HARDDISK_1a05d404-282d-4245-ac59-6a85ac73ef0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-08-14-55-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-08 15:52:05.398549 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.398561 | orchestrator |
2025-10-08 15:52:05.398573 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-10-08 15:52:05.398585 | orchestrator | Wednesday 08 October 2025 15:50:11 +0000 (0:00:00.669) 0:00:18.081 *****
2025-10-08 15:52:05.398597 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:52:05.398608 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:52:05.398619 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:52:05.398631 | orchestrator |
2025-10-08 15:52:05.398642 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-10-08 15:52:05.398653 | orchestrator | Wednesday 08 October 2025 15:50:11 +0000 (0:00:00.694) 0:00:18.775 *****
2025-10-08 15:52:05.398669 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:52:05.398681 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:52:05.398692 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:52:05.398703 | orchestrator |
2025-10-08 15:52:05.398714 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-10-08 15:52:05.398726 | orchestrator | Wednesday 08 October 2025 15:50:12 +0000 (0:00:00.538) 0:00:19.313 *****
2025-10-08 15:52:05.398737 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:52:05.398748 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:52:05.398759 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:52:05.398769 | orchestrator |
2025-10-08 15:52:05.398779 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-10-08 15:52:05.398789 | orchestrator | Wednesday 08 October 2025 15:50:13 +0000 (0:00:00.703) 0:00:20.016 *****
2025-10-08 15:52:05.398799 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.398809 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.398819 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.398829 | orchestrator |
2025-10-08 15:52:05.398839 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-10-08 15:52:05.398849 | orchestrator | Wednesday 08 October 2025 15:50:13 +0000 (0:00:00.310) 0:00:20.327 *****
2025-10-08 15:52:05.398859 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.398869 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.398879 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.398889 | orchestrator |
2025-10-08 15:52:05.398899 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-10-08 15:52:05.398909 | orchestrator | Wednesday 08 October 2025 15:50:13 +0000 (0:00:00.414) 0:00:20.741 *****
2025-10-08 15:52:05.398919 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.398928 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.398938 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.398948 | orchestrator |
2025-10-08 15:52:05.398958 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-10-08 15:52:05.398968 | orchestrator | Wednesday 08 October 2025 15:50:14 +0000 (0:00:00.528) 0:00:21.270 *****
2025-10-08 15:52:05.398978 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-10-08 15:52:05.398988 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-10-08 15:52:05.398998 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-10-08 15:52:05.399008 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-10-08 15:52:05.399018 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-10-08 15:52:05.399028 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-10-08 15:52:05.399037 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-10-08 15:52:05.399047 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-10-08 15:52:05.399063 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-10-08 15:52:05.399073 | orchestrator |
2025-10-08 15:52:05.399083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-10-08 15:52:05.399093 | orchestrator | Wednesday 08 October 2025 15:50:15 +0000 (0:00:00.870) 0:00:22.140 *****
2025-10-08 15:52:05.399103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-10-08 15:52:05.399113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-10-08 15:52:05.399123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-10-08 15:52:05.399133 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-10-08 15:52:05.399153 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-10-08 15:52:05.399163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-10-08 15:52:05.399187 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.399198 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-10-08 15:52:05.399208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-10-08 15:52:05.399218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-10-08 15:52:05.399228 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.399238 | orchestrator |
2025-10-08 15:52:05.399248 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-10-08 15:52:05.399258 | orchestrator | Wednesday 08 October 2025 15:50:15 +0000 (0:00:00.443) 0:00:22.583 *****
2025-10-08 15:52:05.399268 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:52:05.399279 | orchestrator |
2025-10-08 15:52:05.399289 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-10-08 15:52:05.399301 | orchestrator | Wednesday 08 October 2025 15:50:16 +0000 (0:00:00.764) 0:00:23.348 *****
2025-10-08 15:52:05.399311 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399321 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.399331 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.399340 | orchestrator |
2025-10-08 15:52:05.399355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-10-08 15:52:05.399366 | orchestrator | Wednesday 08 October 2025 15:50:16 +0000 (0:00:00.326) 0:00:23.674 *****
2025-10-08 15:52:05.399376 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399386 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.399396 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.399406 | orchestrator |
2025-10-08 15:52:05.399416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-10-08 15:52:05.399426 | orchestrator | Wednesday 08 October 2025 15:50:17 +0000 (0:00:00.321) 0:00:23.996 *****
2025-10-08 15:52:05.399436 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399446 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.399456 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:52:05.399466 | orchestrator |
2025-10-08 15:52:05.399476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-10-08 15:52:05.399486 | orchestrator | Wednesday 08 October 2025 15:50:17 +0000 (0:00:00.317) 0:00:24.313 *****
2025-10-08 15:52:05.399496 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:52:05.399510 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:52:05.399521 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:52:05.399531 | orchestrator |
2025-10-08 15:52:05.399541 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-10-08 15:52:05.399551 | orchestrator | Wednesday 08 October 2025 15:50:18 +0000 (0:00:00.661) 0:00:24.974 *****
2025-10-08 15:52:05.399561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:52:05.399571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:52:05.399590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:52:05.399600 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399610 | orchestrator |
2025-10-08 15:52:05.399620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-10-08 15:52:05.399631 | orchestrator | Wednesday 08 October 2025 15:50:18 +0000 (0:00:00.408) 0:00:25.383 *****
2025-10-08 15:52:05.399641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:52:05.399651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:52:05.399661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:52:05.399670 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399680 | orchestrator |
2025-10-08 15:52:05.399691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-10-08 15:52:05.399701 | orchestrator | Wednesday 08 October 2025 15:50:18 +0000 (0:00:00.403) 0:00:25.786 *****
2025-10-08 15:52:05.399711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:52:05.399721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-08 15:52:05.399731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-08 15:52:05.399741 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.399751 | orchestrator |
2025-10-08 15:52:05.399761 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-10-08 15:52:05.399771 | orchestrator | Wednesday 08 October 2025 15:50:19 +0000 (0:00:00.394) 0:00:26.181 *****
2025-10-08 15:52:05.399781 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:52:05.399790 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:52:05.399800 | orchestrator | ok: [testbed-node-5]
2025-10-08 15:52:05.399810 | orchestrator |
2025-10-08 15:52:05.399821 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-10-08 15:52:05.399831 | orchestrator | Wednesday 08 October 2025 15:50:19 +0000 (0:00:00.316) 0:00:26.497 *****
2025-10-08 15:52:05.399841 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-08 15:52:05.399851 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-10-08 15:52:05.399861 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-10-08 15:52:05.399870 | orchestrator |
2025-10-08 15:52:05.399881 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-10-08 15:52:05.399891 | orchestrator | Wednesday 08 October 2025 15:50:20 +0000 (0:00:00.496) 0:00:26.994 *****
2025-10-08 15:52:05.399901 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-10-08 15:52:05.399911 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:52:05.399921 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:52:05.399930 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:52:05.399940 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-08 15:52:05.399950 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-08 15:52:05.399960 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-08 15:52:05.399970 | orchestrator |
2025-10-08 15:52:05.399980 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-10-08 15:52:05.399990 | orchestrator | Wednesday 08 October 2025 15:50:21 +0000 (0:00:01.028) 0:00:28.022 *****
2025-10-08 15:52:05.400000 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-10-08 15:52:05.400010 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-08 15:52:05.400020 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-08 15:52:05.400030 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-10-08 15:52:05.400040 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-08 15:52:05.400058 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-08 15:52:05.400068 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-08 15:52:05.400078 | orchestrator |
2025-10-08 15:52:05.400095 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-10-08 15:52:05.400105 | orchestrator | Wednesday 08 October 2025 15:50:23 +0000 (0:00:02.051) 0:00:30.074 *****
2025-10-08 15:52:05.400115 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:52:05.400125 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:52:05.400135 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-10-08 15:52:05.400145 | orchestrator |
2025-10-08 15:52:05.400155 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-10-08 15:52:05.400164 | orchestrator | Wednesday 08 October 2025 15:50:23 +0000 (0:00:00.374) 0:00:30.448 *****
2025-10-08 15:52:05.400188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-10-08 15:52:05.400204 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-10-08 15:52:05.400214 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-10-08 15:52:05.400225 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-10-08 15:52:05.400236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-10-08 15:52:05.400246 | orchestrator |
2025-10-08 15:52:05.400256 | orchestrator | TASK [generate keys] ***********************************************************
2025-10-08 15:52:05.400267 | orchestrator | Wednesday 08 October 2025 15:51:08 +0000 (0:00:45.372) 0:01:15.820 *****
2025-10-08 15:52:05.400277 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400287 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400297 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400307 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400317 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400338 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-10-08 15:52:05.400348 | orchestrator |
2025-10-08 15:52:05.400358 | orchestrator | TASK [get keys from monitors] **************************************************
2025-10-08 15:52:05.400368 | orchestrator | Wednesday 08 October 2025 15:51:33 +0000 (0:00:24.091) 0:01:39.911 *****
2025-10-08 15:52:05.400378 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400388 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400405 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400415 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400425 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400445 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-08 15:52:05.400455 | orchestrator | 2025-10-08 15:52:05.400466 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-10-08 15:52:05.400476 | orchestrator | Wednesday 08 October 2025 15:51:44 +0000 (0:00:11.726) 0:01:51.638 ***** 2025-10-08 15:52:05.400486 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:52:05.400496 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:52:05.400506 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 15:52:05.400516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:52:05.400526 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:52:05.400537 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 15:52:05.400553 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:52:05.400564 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:52:05.400573 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 15:52:05.400583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:52:05.400593 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-08 15:52:05.400603 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-08 15:52:05.400613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-08 15:52:05.400623 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-10-08 15:52:05.400633 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-10-08 15:52:05.400674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-08 15:52:05.400686 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-10-08 15:52:05.400696 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-10-08 15:52:05.400706 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-10-08 15:52:05.400716 | orchestrator |
2025-10-08 15:52:05.400726 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:52:05.400736 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-10-08 15:52:05.400747 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-10-08 15:52:05.400758 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-10-08 15:52:05.400768 | orchestrator |
2025-10-08 15:52:05.400778 | orchestrator |
2025-10-08 15:52:05.400787 | orchestrator |
2025-10-08 15:52:05.400797 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:52:05.400807 | orchestrator | Wednesday 08 October 2025 15:52:02 +0000 (0:00:17.318) 0:02:08.956 *****
2025-10-08 15:52:05.400817 | orchestrator | ===============================================================================
2025-10-08 15:52:05.400827 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.37s
2025-10-08 15:52:05.400844 | orchestrator | generate keys ---------------------------------------------------------- 24.09s
2025-10-08 15:52:05.400854 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.32s
2025-10-08 15:52:05.400864 | orchestrator | get keys from monitors ------------------------------------------------- 11.73s
2025-10-08 15:52:05.400874 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s
2025-10-08 15:52:05.400884 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.05s
2025-10-08 15:52:05.400894 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s
2025-10-08 15:52:05.400903 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s
2025-10-08 15:52:05.400913 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s
2025-10-08 15:52:05.400923 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2025-10-08 15:52:05.400933 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s
2025-10-08 15:52:05.400943 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s
2025-10-08 15:52:05.400953 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s
2025-10-08 15:52:05.400963 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s
2025-10-08 15:52:05.400973 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s
2025-10-08 15:52:05.400982 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-10-08 15:52:05.400992 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s
2025-10-08 15:52:05.401002 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.67s
2025-10-08 15:52:05.401012 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.66s
2025-10-08
15:52:05.401022 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.64s
2025-10-08 15:52:05.401032 | orchestrator | 2025-10-08 15:52:05 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED
2025-10-08 15:52:05.401042 | orchestrator | 2025-10-08 15:52:05 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:52:08.424446 | orchestrator | 2025-10-08 15:52:08 | INFO  | Task bf1e5c07-dc60-4243-ac52-5a11f5c06179 is in state STARTED
2025-10-08 15:52:08.426209 | orchestrator | 2025-10-08 15:52:08 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:52:08.428469 | orchestrator | 2025-10-08 15:52:08 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED
2025-10-08 15:52:08.428493 | orchestrator | 2025-10-08 15:52:08 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:52:41.982708 | orchestrator | 2025-10-08 15:52:41 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED
2025-10-08 15:52:41.983546 | orchestrator | 2025-10-08 15:52:41 | INFO  | Task bf1e5c07-dc60-4243-ac52-5a11f5c06179 is in state SUCCESS
2025-10-08 15:52:41.986086 | orchestrator | 2025-10-08 15:52:41 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:52:41.987458 | orchestrator | 2025-10-08 15:52:41 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state STARTED
2025-10-08 15:52:41.987681 | orchestrator | 2025-10-08 15:52:41 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:09.459270 | orchestrator | 2025-10-08 15:53:09 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED
2025-10-08 15:53:09.460468 | orchestrator | 2025-10-08 15:53:09 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:09.464228 | orchestrator | 2025-10-08 15:53:09 | INFO  | Task 3e7cc80d-12b3-4174-8468-e3f41b069550 is in state SUCCESS
2025-10-08 15:53:09.466642 | orchestrator |
2025-10-08 15:53:09.466680 | orchestrator |
2025-10-08 15:53:09.466693 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-10-08 15:53:09.466705 | orchestrator |
2025-10-08 15:53:09.466717 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-10-08 15:53:09.466728 | orchestrator | Wednesday 08 October 2025 15:52:06 +0000 (0:00:00.146) 0:00:00.146 *****
2025-10-08 15:53:09.466739 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-10-08 15:53:09.466752 |
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466763 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466773 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-10-08 15:53:09.466784 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466796 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-10-08 15:53:09.466806 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-10-08 15:53:09.466817 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-10-08 15:53:09.466828 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-10-08 15:53:09.466839 | orchestrator | 2025-10-08 15:53:09.466850 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-10-08 15:53:09.466861 | orchestrator | Wednesday 08 October 2025 15:52:10 +0000 (0:00:04.580) 0:00:04.727 ***** 2025-10-08 15:53:09.466872 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-10-08 15:53:09.466906 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466918 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466929 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-10-08 15:53:09.466940 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 
=> (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.466950 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-10-08 15:53:09.466961 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-10-08 15:53:09.466972 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-10-08 15:53:09.466983 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-10-08 15:53:09.466994 | orchestrator | 2025-10-08 15:53:09.467005 | orchestrator | TASK [Create share directory] ************************************************** 2025-10-08 15:53:09.467016 | orchestrator | Wednesday 08 October 2025 15:52:14 +0000 (0:00:04.020) 0:00:08.747 ***** 2025-10-08 15:53:09.467028 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-08 15:53:09.467039 | orchestrator | 2025-10-08 15:53:09.467050 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-10-08 15:53:09.467061 | orchestrator | Wednesday 08 October 2025 15:52:15 +0000 (0:00:00.896) 0:00:09.644 ***** 2025-10-08 15:53:09.467072 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-10-08 15:53:09.467083 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467093 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467105 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-10-08 15:53:09.467115 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467126 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-10-08 15:53:09.467137 | orchestrator | 
changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-10-08 15:53:09.467148 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-10-08 15:53:09.467159 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-10-08 15:53:09.467170 | orchestrator | 2025-10-08 15:53:09.467201 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-10-08 15:53:09.467212 | orchestrator | Wednesday 08 October 2025 15:52:28 +0000 (0:00:12.979) 0:00:22.623 ***** 2025-10-08 15:53:09.467222 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-10-08 15:53:09.467234 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-10-08 15:53:09.467256 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-10-08 15:53:09.467268 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-10-08 15:53:09.467292 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-10-08 15:53:09.467306 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-10-08 15:53:09.467318 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-10-08 15:53:09.467331 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-10-08 15:53:09.467352 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-10-08 15:53:09.467364 | orchestrator | 2025-10-08 15:53:09.467376 | orchestrator | TASK [Write ceph keys to the configuration 
directory] ************************** 2025-10-08 15:53:09.467389 | orchestrator | Wednesday 08 October 2025 15:52:32 +0000 (0:00:04.034) 0:00:26.658 ***** 2025-10-08 15:53:09.467402 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-10-08 15:53:09.467414 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467427 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467439 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-10-08 15:53:09.467451 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-08 15:53:09.467464 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-10-08 15:53:09.467475 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-10-08 15:53:09.467487 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-10-08 15:53:09.467499 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-10-08 15:53:09.467512 | orchestrator | 2025-10-08 15:53:09.467524 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:53:09.467537 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:53:09.467549 | orchestrator | 2025-10-08 15:53:09.467561 | orchestrator | 2025-10-08 15:53:09.467574 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:53:09.467586 | orchestrator | Wednesday 08 October 2025 15:52:39 +0000 (0:00:06.914) 0:00:33.572 ***** 2025-10-08 15:53:09.467597 | orchestrator | =============================================================================== 2025-10-08 15:53:09.467607 | orchestrator | Write ceph keys to the share directory 
--------------------------------- 12.98s
2025-10-08 15:53:09.467618 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.91s
2025-10-08 15:53:09.467629 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.58s
2025-10-08 15:53:09.467639 | orchestrator | Check if target directories exist --------------------------------------- 4.03s
2025-10-08 15:53:09.467650 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.02s
2025-10-08 15:53:09.467660 | orchestrator | Create share directory -------------------------------------------------- 0.90s
2025-10-08 15:53:09.467671 | orchestrator |
2025-10-08 15:53:09.467682 | orchestrator |
2025-10-08 15:53:09.467692 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:53:09.467703 | orchestrator |
2025-10-08 15:53:09.467713 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:53:09.467724 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.244) 0:00:00.244 *****
2025-10-08 15:53:09.467735 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.467746 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.467756 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.467767 | orchestrator |
2025-10-08 15:53:09.467778 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:53:09.467788 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.285) 0:00:00.529 *****
2025-10-08 15:53:09.467799 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-10-08 15:53:09.467810 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-10-08 15:53:09.467821 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-10-08 15:53:09.467832 | orchestrator |
2025-10-08 15:53:09.467843 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-10-08 15:53:09.467853 | orchestrator |
2025-10-08 15:53:09.467864 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-10-08 15:53:09.467882 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.369) 0:00:00.899 *****
2025-10-08 15:53:09.467893 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:53:09.467903 | orchestrator |
2025-10-08 15:53:09.467914 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-10-08 15:53:09.467925 | orchestrator | Wednesday 08 October 2025 15:51:18 +0000 (0:00:00.499) 0:00:01.398 *****
2025-10-08 15:53:09.467958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.467976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.468012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.468026 | orchestrator |
2025-10-08 15:53:09.468037 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-10-08 15:53:09.468048 | orchestrator | Wednesday 08 October 2025 15:51:19 +0000 (0:00:01.114) 0:00:02.512 *****
2025-10-08 15:53:09.468059 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.468070 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.468081 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.468092 | orchestrator |
2025-10-08 15:53:09.468103 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-10-08 15:53:09.468114 | orchestrator | Wednesday 08 October 2025 15:51:19 +0000 (0:00:00.466) 0:00:02.979 *****
2025-10-08 15:53:09.468125 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-08 15:53:09.468136 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-08 15:53:09.468147 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-10-08 15:53:09.468157 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-10-08 15:53:09.468209 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-10-08 15:53:09.468221 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-10-08 15:53:09.468232 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-10-08 15:53:09.468243 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-10-08 15:53:09.468254 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-08 15:53:09.468264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-08 15:53:09.468275 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-10-08 15:53:09.468286 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-10-08 15:53:09.468297 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-10-08 15:53:09.468308 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-10-08 15:53:09.468319 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-10-08 15:53:09.468330 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-10-08 15:53:09.468340 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-08 15:53:09.468351 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-08 15:53:09.468362 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-10-08 15:53:09.468378 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-10-08 15:53:09.468389 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-10-08 15:53:09.468401 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-10-08 15:53:09.468418 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-10-08 15:53:09.468430 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-10-08 15:53:09.468442 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-10-08 15:53:09.468454 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-10-08 15:53:09.468466 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-10-08 15:53:09.468476 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-10-08 15:53:09.468487 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-10-08 15:53:09.468498 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-10-08 15:53:09.468509 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-10-08 15:53:09.468520 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-10-08 15:53:09.468531 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-10-08 15:53:09.468542 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-10-08 15:53:09.468559 | orchestrator |
2025-10-08 15:53:09.468570 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.468581 | orchestrator | Wednesday 08 October 2025 15:51:20 +0000 (0:00:00.739) 0:00:03.718 *****
2025-10-08 15:53:09.468592 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.468603 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.468614 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.468625 | orchestrator |
2025-10-08 15:53:09.468635 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.468646 | orchestrator | Wednesday 08 October 2025 15:51:20 +0000 (0:00:00.291) 0:00:04.009 *****
2025-10-08 15:53:09.468657 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.468668 | orchestrator |
2025-10-08 15:53:09.468679 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.468690 | orchestrator | Wednesday 08 October 2025 15:51:20 +0000 (0:00:00.116) 0:00:04.126 *****
2025-10-08 15:53:09.468700 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.468711 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.468722 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.468733 | orchestrator |
2025-10-08 15:53:09.468744 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.468755 | orchestrator | Wednesday 08 October 2025 15:51:21 +0000 (0:00:00.517) 0:00:04.643 *****
2025-10-08 15:53:09.468765 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.468776 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.468787 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.468798 | orchestrator |
2025-10-08 15:53:09.468808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.468819 | orchestrator | Wednesday 08 October 2025 15:51:21 +0000 (0:00:00.301) 0:00:04.945 *****
2025-10-08 15:53:09.468830 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.468841 | orchestrator |
2025-10-08 15:53:09.468852 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.468863 | orchestrator | Wednesday 08 October 2025 15:51:21 +0000 (0:00:00.149) 0:00:05.094 *****
2025-10-08 15:53:09.468873 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.468884 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.468895 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.468906 | orchestrator |
2025-10-08 15:53:09.468917 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.468928 | orchestrator | Wednesday 08 October 2025 15:51:22 +0000 (0:00:00.297) 0:00:05.391 *****
2025-10-08 15:53:09.468938 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.468949 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.468960 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.468971 | orchestrator |
2025-10-08 15:53:09.468981 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.468992 | orchestrator | Wednesday 08 October 2025 15:51:22 +0000 (0:00:00.348) 0:00:05.740 *****
2025-10-08 15:53:09.469003 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469014 | orchestrator |
2025-10-08 15:53:09.469025 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.469040 | orchestrator | Wednesday 08 October 2025 15:51:22 +0000 (0:00:00.318) 0:00:06.058 *****
2025-10-08 15:53:09.469052 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469063 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.469074 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.469085 | orchestrator |
2025-10-08 15:53:09.469096 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.469112 | orchestrator | Wednesday 08 October 2025 15:51:23 +0000 (0:00:00.318) 0:00:06.377 *****
2025-10-08 15:53:09.469123 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.469134 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.469153 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.469164 | orchestrator |
2025-10-08 15:53:09.469220 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.469233 | orchestrator | Wednesday 08 October 2025 15:51:23 +0000 (0:00:00.337) 0:00:06.715 *****
2025-10-08 15:53:09.469244 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469483 | orchestrator |
2025-10-08 15:53:09.469498 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.469509 | orchestrator | Wednesday 08 October 2025 15:51:23 +0000 (0:00:00.142) 0:00:06.857 *****
2025-10-08 15:53:09.469520 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469531 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.469543 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.469553 | orchestrator |
2025-10-08 15:53:09.469564 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.469575 | orchestrator | Wednesday 08 October 2025 15:51:23 +0000 (0:00:00.285) 0:00:07.142 *****
2025-10-08 15:53:09.469586 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.469597 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.469607 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.469618 | orchestrator |
2025-10-08 15:53:09.469629 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.469640 | orchestrator | Wednesday 08 October 2025 15:51:24 +0000 (0:00:00.518) 0:00:07.661 *****
2025-10-08 15:53:09.469651 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469661 | orchestrator |
2025-10-08 15:53:09.469672 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.469683 | orchestrator | Wednesday 08 October 2025 15:51:24 +0000 (0:00:00.146) 0:00:07.807 *****
2025-10-08 15:53:09.469694 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469705 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.469716 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.469726 | orchestrator |
2025-10-08 15:53:09.469737 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.469748 | orchestrator | Wednesday 08 October 2025 15:51:24 +0000 (0:00:00.321) 0:00:08.128 *****
2025-10-08 15:53:09.469759 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.469770 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.469781 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.469791 | orchestrator |
2025-10-08 15:53:09.469802 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.469813 | orchestrator | Wednesday 08 October 2025 15:51:25 +0000 (0:00:00.404) 0:00:08.533 *****
2025-10-08 15:53:09.469824 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469835 | orchestrator |
2025-10-08 15:53:09.469845 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.469856 | orchestrator | Wednesday 08 October 2025 15:51:25 +0000 (0:00:00.124) 0:00:08.657 *****
2025-10-08 15:53:09.469867 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.469878 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.469889 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.469900 | orchestrator |
2025-10-08 15:53:09.469911 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.469922 | orchestrator | Wednesday 08 October 2025 15:51:25 +0000 (0:00:00.278) 0:00:08.936 *****
2025-10-08 15:53:09.469933 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.469944 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.469955 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.469966 | orchestrator |
2025-10-08 15:53:09.469976 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.469987 | orchestrator | Wednesday 08 October 2025 15:51:26 +0000 (0:00:00.520) 0:00:09.457 *****
2025-10-08 15:53:09.469998 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470009 | orchestrator |
2025-10-08 15:53:09.470056 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.470082 | orchestrator | Wednesday 08 October 2025 15:51:26 +0000 (0:00:00.151) 0:00:09.608 *****
2025-10-08 15:53:09.470095 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470108 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.470121 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.470133 | orchestrator |
2025-10-08 15:53:09.470147 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.470160 | orchestrator | Wednesday 08 October 2025 15:51:26 +0000 (0:00:00.310) 0:00:09.918 *****
2025-10-08 15:53:09.470193 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.470206 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.470218 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.470231 | orchestrator |
2025-10-08 15:53:09.470243 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.470256 | orchestrator | Wednesday 08 October 2025 15:51:27 +0000 (0:00:00.337) 0:00:10.256 *****
2025-10-08 15:53:09.470269 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470280 | orchestrator |
2025-10-08 15:53:09.470293 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.470305 | orchestrator | Wednesday 08 October 2025 15:51:27 +0000 (0:00:00.145) 0:00:10.402 *****
2025-10-08 15:53:09.470317 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470330 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.470343 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.470355 | orchestrator |
2025-10-08 15:53:09.470367 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.470379 | orchestrator | Wednesday 08 October 2025 15:51:27 +0000 (0:00:00.298) 0:00:10.700 *****
2025-10-08 15:53:09.470392 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.470403 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.470414 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.470424 | orchestrator |
2025-10-08 15:53:09.470448 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.470460 | orchestrator | Wednesday 08 October 2025 15:51:28 +0000 (0:00:00.632) 0:00:11.332 *****
2025-10-08 15:53:09.470471 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470482 | orchestrator |
2025-10-08 15:53:09.470500 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.470512 | orchestrator | Wednesday 08 October 2025 15:51:28 +0000 (0:00:00.119) 0:00:11.451 *****
2025-10-08 15:53:09.470523 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470534 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.470545 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.470556 | orchestrator |
2025-10-08 15:53:09.470567 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-08 15:53:09.470578 | orchestrator | Wednesday 08 October 2025 15:51:28 +0000 (0:00:00.333) 0:00:11.785 *****
2025-10-08 15:53:09.470589 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:53:09.470600 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:53:09.470611 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:53:09.470622 | orchestrator |
2025-10-08 15:53:09.470633 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-08 15:53:09.470644 | orchestrator | Wednesday 08 October 2025 15:51:28 +0000 (0:00:00.357) 0:00:12.142 *****
2025-10-08 15:53:09.470655 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470666 | orchestrator |
2025-10-08 15:53:09.470677 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-08 15:53:09.470688 | orchestrator | Wednesday 08 October 2025 15:51:29 +0000 (0:00:00.133) 0:00:12.276 *****
2025-10-08 15:53:09.470699 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.470710 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.470721 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.470732 | orchestrator |
2025-10-08 15:53:09.470743 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-10-08 15:53:09.470763 | orchestrator | Wednesday 08 October 2025 15:51:29 +0000 (0:00:00.510) 0:00:12.786 *****
2025-10-08 15:53:09.470774 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:53:09.470785 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:53:09.470796 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:53:09.470807 | orchestrator |
2025-10-08 15:53:09.470818 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-10-08 15:53:09.470829 | orchestrator | Wednesday 08 October 2025 15:51:31 +0000 (0:00:01.685) 0:00:14.471 *****
2025-10-08 15:53:09.470840 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-08 15:53:09.470851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-08 15:53:09.470862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-08 15:53:09.470873 | orchestrator |
2025-10-08 15:53:09.470884 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-10-08 15:53:09.470895 | orchestrator | Wednesday 08 October 2025 15:51:33 +0000 (0:00:02.101) 0:00:16.572 *****
2025-10-08 15:53:09.470906 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-08 15:53:09.470917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-08 15:53:09.470929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-08 15:53:09.470940 | orchestrator |
2025-10-08 15:53:09.470951 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-10-08 15:53:09.470962 | orchestrator | Wednesday 08 October 2025 15:51:35 +0000 (0:00:02.069) 0:00:18.641 *****
2025-10-08 15:53:09.470972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-08 15:53:09.471112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-08 15:53:09.471127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-08 15:53:09.471138 | orchestrator |
2025-10-08 15:53:09.471149 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-10-08 15:53:09.471160 | orchestrator | Wednesday 08 October 2025 15:51:37 +0000 (0:00:02.077) 0:00:20.719 *****
2025-10-08 15:53:09.471188 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.471200 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.471211 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.471222 | orchestrator |
2025-10-08 15:53:09.471233 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-10-08 15:53:09.471244 | orchestrator | Wednesday 08 October 2025 15:51:37 +0000 (0:00:00.295) 0:00:21.029 *****
2025-10-08 15:53:09.471255 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.471266 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:53:09.471277 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:53:09.471288 | orchestrator |
2025-10-08 15:53:09.471299 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-10-08 15:53:09.471310 | orchestrator | Wednesday 08 October 2025 15:51:38 +0000 (0:00:00.295) 0:00:21.325 *****
2025-10-08 15:53:09.471321 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:53:09.471332 | orchestrator |
2025-10-08 15:53:09.471342 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-10-08 15:53:09.471353 | orchestrator | Wednesday 08 October 2025 15:51:38 +0000 (0:00:00.758) 0:00:22.083 *****
2025-10-08 15:53:09.471385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.471409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.471443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.471457 | orchestrator |
2025-10-08 15:53:09.471468 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-10-08 15:53:09.471479 | orchestrator | Wednesday 08 October 2025 15:51:40 +0000 (0:00:01.418) 0:00:23.502 *****
2025-10-08 15:53:09.471503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-08 15:53:09.471524 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:53:09.471536 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:53:09.471548 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:53:09.471574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:53:09.471593 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:53:09.471604 | orchestrator | 2025-10-08 15:53:09.471615 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-10-08 15:53:09.471626 | orchestrator | Wednesday 08 October 2025 15:51:40 +0000 (0:00:00.609) 0:00:24.111 ***** 2025-10-08 15:53:09.471638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:53:09.471650 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:53:09.471676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:53:09.471695 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:53:09.471708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-08 15:53:09.471723 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:53:09.471735 | orchestrator | 2025-10-08 15:53:09.471748 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-10-08 15:53:09.471760 | orchestrator | Wednesday 08 October 2025 15:51:41 +0000 (0:00:00.800) 0:00:24.911 ***** 2025-10-08 15:53:09.471796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:53:09.471811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:53:09.471852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-08 15:53:09.471867 | orchestrator | 2025-10-08 15:53:09.471878 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-08 15:53:09.471889 | orchestrator | Wednesday 08 October 2025 15:51:43 +0000 (0:00:01.558) 0:00:26.470 ***** 2025-10-08 15:53:09.471900 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:53:09.471911 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:53:09.471922 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:53:09.471933 | orchestrator | 2025-10-08 15:53:09.471944 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-08 15:53:09.471955 | orchestrator | Wednesday 08 October 2025 15:51:43 +0000 (0:00:00.295) 0:00:26.765 ***** 2025-10-08 15:53:09.471966 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:53:09.471977 | orchestrator | 2025-10-08 15:53:09.471987 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-10-08 15:53:09.471998 | orchestrator | Wednesday 08 October 2025 15:51:44 +0000 (0:00:00.487) 0:00:27.252 ***** 2025-10-08 15:53:09.472009 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:53:09.472020 | orchestrator | 2025-10-08 15:53:09.472031 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-10-08 15:53:09.472041 | orchestrator | Wednesday 08 October 
2025 15:51:46 +0000 (0:00:02.437) 0:00:29.690 ***** 2025-10-08 15:53:09.472052 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:53:09.472063 | orchestrator | 2025-10-08 15:53:09.472074 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-10-08 15:53:09.472091 | orchestrator | Wednesday 08 October 2025 15:51:49 +0000 (0:00:02.594) 0:00:32.284 ***** 2025-10-08 15:53:09.472102 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:53:09.472113 | orchestrator | 2025-10-08 15:53:09.472124 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-08 15:53:09.472135 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:16.405) 0:00:48.690 ***** 2025-10-08 15:53:09.472145 | orchestrator | 2025-10-08 15:53:09.472156 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-08 15:53:09.472167 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:00.060) 0:00:48.751 ***** 2025-10-08 15:53:09.472236 | orchestrator | 2025-10-08 15:53:09.472247 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-08 15:53:09.472258 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:00.061) 0:00:48.812 ***** 2025-10-08 15:53:09.472269 | orchestrator | 2025-10-08 15:53:09.472280 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-10-08 15:53:09.472291 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:00.073) 0:00:48.885 ***** 2025-10-08 15:53:09.472301 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:53:09.472312 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:53:09.472324 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:53:09.472334 | orchestrator | 2025-10-08 15:53:09.472345 | orchestrator | PLAY RECAP 
********************************************************************* 2025-10-08 15:53:09.472356 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-10-08 15:53:09.472373 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-08 15:53:09.472384 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-08 15:53:09.472395 | orchestrator | 2025-10-08 15:53:09.472406 | orchestrator | 2025-10-08 15:53:09.472423 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:53:09.472435 | orchestrator | Wednesday 08 October 2025 15:53:06 +0000 (0:01:00.872) 0:01:49.758 ***** 2025-10-08 15:53:09.472445 | orchestrator | =============================================================================== 2025-10-08 15:53:09.472456 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.87s 2025-10-08 15:53:09.472467 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.41s 2025-10-08 15:53:09.472478 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.59s 2025-10-08 15:53:09.472489 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.44s 2025-10-08 15:53:09.472499 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.10s 2025-10-08 15:53:09.472510 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.08s 2025-10-08 15:53:09.472521 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.07s 2025-10-08 15:53:09.472532 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s 2025-10-08 15:53:09.472541 | orchestrator | horizon : Deploy horizon container 
-------------------------------------- 1.56s 2025-10-08 15:53:09.472685 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.42s 2025-10-08 15:53:09.472696 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.11s 2025-10-08 15:53:09.472705 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2025-10-08 15:53:09.472715 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-10-08 15:53:09.472725 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-10-08 15:53:09.472734 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2025-10-08 15:53:09.472753 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2025-10-08 15:53:09.472763 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-10-08 15:53:09.472773 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-10-08 15:53:09.472783 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-10-08 15:53:09.472792 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-10-08 15:53:09.472802 | orchestrator | 2025-10-08 15:53:09 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:12.504264 | orchestrator | 2025-10-08 15:53:12 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:12.509447 | orchestrator | 2025-10-08 15:53:12 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:12.509849 | orchestrator | 2025-10-08 15:53:12 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:15.554751 | orchestrator | 2025-10-08 15:53:15 | INFO  | Task 
c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:15.556575 | orchestrator | 2025-10-08 15:53:15 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:15.556608 | orchestrator | 2025-10-08 15:53:15 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:18.606691 | orchestrator | 2025-10-08 15:53:18 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:18.607931 | orchestrator | 2025-10-08 15:53:18 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:18.608527 | orchestrator | 2025-10-08 15:53:18 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:21.650618 | orchestrator | 2025-10-08 15:53:21 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:21.654499 | orchestrator | 2025-10-08 15:53:21 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:21.654590 | orchestrator | 2025-10-08 15:53:21 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:24.690094 | orchestrator | 2025-10-08 15:53:24 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:24.690692 | orchestrator | 2025-10-08 15:53:24 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:24.690991 | orchestrator | 2025-10-08 15:53:24 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:27.740797 | orchestrator | 2025-10-08 15:53:27 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 15:53:27.742461 | orchestrator | 2025-10-08 15:53:27 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED 2025-10-08 15:53:27.742513 | orchestrator | 2025-10-08 15:53:27 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:53:30.786686 | orchestrator | 2025-10-08 15:53:30 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED 2025-10-08 
15:53:30.790407 | orchestrator | 2025-10-08 15:53:30 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:30.791642 | orchestrator | 2025-10-08 15:53:30 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:33.838335 | orchestrator | 2025-10-08 15:53:33 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED
2025-10-08 15:53:33.840150 | orchestrator | 2025-10-08 15:53:33 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:33.840207 | orchestrator | 2025-10-08 15:53:33 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:36.887473 | orchestrator | 2025-10-08 15:53:36 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED
2025-10-08 15:53:36.889565 | orchestrator | 2025-10-08 15:53:36 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:36.890496 | orchestrator | 2025-10-08 15:53:36 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:39.936548 | orchestrator | 2025-10-08 15:53:39 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state STARTED
2025-10-08 15:53:39.937248 | orchestrator | 2025-10-08 15:53:39 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:39.937281 | orchestrator | 2025-10-08 15:53:39 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:42.975615 | orchestrator | 2025-10-08 15:53:42 | INFO  | Task c9bc91f9-036b-4c95-94b6-61f7f7562450 is in state SUCCESS
2025-10-08 15:53:42.976455 | orchestrator | 2025-10-08 15:53:42 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:53:42.977276 | orchestrator | 2025-10-08 15:53:42 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:53:42.978096 | orchestrator | 2025-10-08 15:53:42 | INFO  | Task 62a149b4-79d4-423b-ba45-6d6d8896df4a is in state STARTED
2025-10-08 15:53:42.981686 | orchestrator | 2025-10-08 15:53:42 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:42.981717 | orchestrator | 2025-10-08 15:53:42 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:46.029540 | orchestrator | 2025-10-08 15:53:46 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:53:46.029636 | orchestrator | 2025-10-08 15:53:46 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:53:46.031128 | orchestrator | 2025-10-08 15:53:46 | INFO  | Task 62a149b4-79d4-423b-ba45-6d6d8896df4a is in state STARTED
2025-10-08 15:53:46.031901 | orchestrator | 2025-10-08 15:53:46 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:46.031925 | orchestrator | 2025-10-08 15:53:46 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:49.059970 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:53:49.060762 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:53:49.061443 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:53:49.062151 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:53:49.063983 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 62a149b4-79d4-423b-ba45-6d6d8896df4a is in state SUCCESS
2025-10-08 15:53:49.064754 | orchestrator | 2025-10-08 15:53:49 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:53:49.065641 | orchestrator | 2025-10-08 15:53:49 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:53:52.093914 | orchestrator | 2025-10-08 15:53:52 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:53:52.094226 | orchestrator | 2025-10-08 15:53:52 | INFO  | Task
756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:07.332523 | orchestrator | 2025-10-08 15:54:07 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:07.334517 | orchestrator | 2025-10-08 15:54:07 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:07.335881 | orchestrator | 2025-10-08 15:54:07 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:54:07.335911 | orchestrator | 2025-10-08 15:54:07 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:10.376931 | orchestrator | 2025-10-08 15:54:10 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:10.382900 | orchestrator | 2025-10-08 15:54:10 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:10.386128 | orchestrator | 2025-10-08 15:54:10 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:10.388762 | orchestrator | 2025-10-08 15:54:10 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:10.390825 | orchestrator | 2025-10-08 15:54:10 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state STARTED
2025-10-08 15:54:10.390848 | orchestrator | 2025-10-08 15:54:10 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:13.423711 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:13.424080 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:13.426253 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:13.426303 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:13.427232 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:13.429872 | orchestrator | 2025-10-08 15:54:13 | INFO  | Task 5d4969ba-4cce-4e08-aff4-0a4ef3c3fd4c is in state SUCCESS
2025-10-08 15:54:13.431681 | orchestrator |
2025-10-08 15:54:13.431714 | orchestrator |
2025-10-08 15:54:13.431727 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-10-08 15:54:13.431739 | orchestrator |
2025-10-08 15:54:13.431750 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-10-08 15:54:13.431762 | orchestrator | Wednesday 08 October 2025 15:52:44 +0000 (0:00:00.225) 0:00:00.225 *****
2025-10-08 15:54:13.431774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-10-08 15:54:13.431787 | orchestrator |
2025-10-08 15:54:13.431798 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-10-08 15:54:13.431809 | orchestrator | Wednesday 08 October 2025 15:52:44 +0000 (0:00:00.228) 0:00:00.454 *****
2025-10-08 15:54:13.431821 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-10-08 15:54:13.431832 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-10-08 15:54:13.431843 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-10-08 15:54:13.431854 | orchestrator |
2025-10-08 15:54:13.431865 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-10-08 15:54:13.431876 | orchestrator | Wednesday 08 October 2025 15:52:45 +0000 (0:00:01.293) 0:00:01.748 *****
2025-10-08 15:54:13.431887 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-10-08 15:54:13.431898 | orchestrator |
2025-10-08 15:54:13.431908 | orchestrator | TASK
[osism.services.cephclient : Copy keyring file] ***************************
2025-10-08 15:54:13.431919 | orchestrator | Wednesday 08 October 2025 15:52:47 +0000 (0:00:01.451) 0:00:03.199 *****
2025-10-08 15:54:13.431930 | orchestrator | changed: [testbed-manager]
2025-10-08 15:54:13.431964 | orchestrator |
2025-10-08 15:54:13.431976 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-10-08 15:54:13.431986 | orchestrator | Wednesday 08 October 2025 15:52:48 +0000 (0:00:00.884) 0:00:04.084 *****
2025-10-08 15:54:13.431997 | orchestrator | changed: [testbed-manager]
2025-10-08 15:54:13.432008 | orchestrator |
2025-10-08 15:54:13.432018 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-10-08 15:54:13.432029 | orchestrator | Wednesday 08 October 2025 15:52:48 +0000 (0:00:00.960) 0:00:05.044 *****
2025-10-08 15:54:13.432040 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
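Both the osism task polling earlier in this log ("Task … is in state STARTED", "Wait 1 second(s) until the next check") and the Ansible retry here ("10 retries left" on Manage cephclient service) follow the same pattern: re-check a status until it reaches a terminal state, with a bounded number of attempts. A minimal sketch of that loop, with illustrative names not taken from the OSISM code base:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_state(check, interval=1.0, retries=10):
    """Call check() until it returns a terminal state or retries run out."""
    for _ in range(retries):
        state = check()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError(f"no terminal state after {retries} checks")

# Hypothetical task that reports STARTED twice before succeeding.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_state(lambda: next(states), interval=0)
print(result)  # SUCCESS
```

In the log above the same idea appears twice at different layers: the osism client polls Celery task state every few seconds, while Ansible's retries/until mechanism re-runs the whole module until its condition holds.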
2025-10-08 15:54:13.432051 | orchestrator | ok: [testbed-manager]
2025-10-08 15:54:13.432061 | orchestrator |
2025-10-08 15:54:13.432072 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-10-08 15:54:13.432082 | orchestrator | Wednesday 08 October 2025 15:53:30 +0000 (0:00:41.433) 0:00:46.478 *****
2025-10-08 15:54:13.432093 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-10-08 15:54:13.432104 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-10-08 15:54:13.432115 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-10-08 15:54:13.432126 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-10-08 15:54:13.432137 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-10-08 15:54:13.432695 | orchestrator |
2025-10-08 15:54:13.432716 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-10-08 15:54:13.432727 | orchestrator | Wednesday 08 October 2025 15:53:34 +0000 (0:00:04.297) 0:00:50.776 *****
2025-10-08 15:54:13.432738 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-10-08 15:54:13.432749 | orchestrator |
2025-10-08 15:54:13.432760 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-10-08 15:54:13.432770 | orchestrator | Wednesday 08 October 2025 15:53:35 +0000 (0:00:00.462) 0:00:51.238 *****
2025-10-08 15:54:13.432781 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:54:13.432792 | orchestrator |
2025-10-08 15:54:13.432802 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-10-08 15:54:13.432813 | orchestrator | Wednesday 08 October 2025 15:53:35 +0000 (0:00:00.132) 0:00:51.370 *****
2025-10-08 15:54:13.432824 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:54:13.432835 | orchestrator |
2025-10-08 15:54:13.432846 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-10-08 15:54:13.432856 | orchestrator | Wednesday 08 October 2025 15:53:35 +0000 (0:00:00.531) 0:00:51.902 *****
2025-10-08 15:54:13.432867 | orchestrator | changed: [testbed-manager]
2025-10-08 15:54:13.432878 | orchestrator |
2025-10-08 15:54:13.432889 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-10-08 15:54:13.432899 | orchestrator | Wednesday 08 October 2025 15:53:37 +0000 (0:00:01.591) 0:00:53.494 *****
2025-10-08 15:54:13.432910 | orchestrator | changed: [testbed-manager]
2025-10-08 15:54:13.432921 | orchestrator |
2025-10-08 15:54:13.432946 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-10-08 15:54:13.432957 | orchestrator | Wednesday 08 October 2025 15:53:38 +0000 (0:00:00.794) 0:00:54.288 *****
2025-10-08 15:54:13.432968 | orchestrator | changed: [testbed-manager]
2025-10-08 15:54:13.432979 | orchestrator |
2025-10-08 15:54:13.432989 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-10-08 15:54:13.433000 | orchestrator | Wednesday 08 October 2025 15:53:38 +0000 (0:00:00.654) 0:00:54.942 *****
2025-10-08 15:54:13.433011 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-10-08 15:54:13.433021 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-10-08 15:54:13.433032 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-10-08 15:54:13.433043 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-10-08 15:54:13.433054 | orchestrator |
2025-10-08 15:54:13.433065 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:54:13.433086 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 15:54:13.433097 | orchestrator |
2025-10-08 15:54:13.433108 | orchestrator |
2025-10-08 15:54:13.433178 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:54:13.433192 | orchestrator | Wednesday 08 October 2025 15:53:40 +0000 (0:00:01.588) 0:00:56.531 *****
2025-10-08 15:54:13.433203 | orchestrator | ===============================================================================
2025-10-08 15:54:13.433214 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.43s
2025-10-08 15:54:13.433225 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.30s
2025-10-08 15:54:13.433236 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s
2025-10-08 15:54:13.433246 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s
2025-10-08 15:54:13.433257 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.45s
2025-10-08 15:54:13.433267 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s
2025-10-08 15:54:13.433278 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2025-10-08 15:54:13.433289 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2025-10-08 15:54:13.433300 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s
2025-10-08 15:54:13.433310 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-10-08 15:54:13.433321 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.53s
2025-10-08 15:54:13.433332 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2025-10-08 15:54:13.433343 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-10-08 15:54:13.433353 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-10-08 15:54:13.433364 | orchestrator |
2025-10-08 15:54:13.433375 | orchestrator |
2025-10-08 15:54:13.433385 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:54:13.433396 | orchestrator |
2025-10-08 15:54:13.433406 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:54:13.433417 | orchestrator | Wednesday 08 October 2025 15:53:44 +0000 (0:00:00.178) 0:00:00.178 *****
2025-10-08 15:54:13.433428 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:54:13.433439 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:54:13.433450 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:54:13.433460 | orchestrator |
2025-10-08 15:54:13.433471 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:54:13.433482 | orchestrator | Wednesday 08 October 2025 15:53:44 +0000 (0:00:00.252) 0:00:00.431 *****
2025-10-08 15:54:13.433492 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-10-08 15:54:13.433503 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-10-08 15:54:13.433514 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-10-08 15:54:13.433525 | orchestrator |
2025-10-08 15:54:13.433536 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-10-08 15:54:13.433547 | orchestrator |
2025-10-08 15:54:13.433557 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-10-08 15:54:13.433568 | orchestrator | Wednesday 08 October 2025 15:53:45 +0000 (0:00:00.715) 0:00:01.147 *****
2025-10-08 15:54:13.433578 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:54:13.433589 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:54:13.433600 | orchestrator | ok:
[testbed-node-2]
2025-10-08 15:54:13.433611 | orchestrator |
2025-10-08 15:54:13.433621 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:54:13.433633 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:54:13.433653 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:54:13.433664 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 15:54:13.433675 | orchestrator |
2025-10-08 15:54:13.433686 | orchestrator |
2025-10-08 15:54:13.433696 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:54:13.433707 | orchestrator | Wednesday 08 October 2025 15:53:46 +0000 (0:00:00.778) 0:00:01.925 *****
2025-10-08 15:54:13.433718 | orchestrator | ===============================================================================
2025-10-08 15:54:13.433729 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s
2025-10-08 15:54:13.433745 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2025-10-08 15:54:13.433756 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2025-10-08 15:54:13.433767 | orchestrator |
2025-10-08 15:54:13.433778 | orchestrator |
2025-10-08 15:54:13.433789 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:54:13.433800 | orchestrator |
2025-10-08 15:54:13.433810 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:54:13.433821 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.301) 0:00:00.301 *****
2025-10-08 15:54:13.433831 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:54:13.433842 |
orchestrator | ok: [testbed-node-1]
2025-10-08 15:54:13.433853 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:54:13.433864 | orchestrator |
2025-10-08 15:54:13.433874 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 15:54:13.433885 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.263) 0:00:00.564 *****
2025-10-08 15:54:13.433896 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-10-08 15:54:13.433906 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-10-08 15:54:13.433917 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-10-08 15:54:13.433928 | orchestrator |
2025-10-08 15:54:13.433939 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-10-08 15:54:13.433950 | orchestrator |
2025-10-08 15:54:13.433995 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-08 15:54:13.434008 | orchestrator | Wednesday 08 October 2025 15:51:17 +0000 (0:00:00.370) 0:00:00.935 *****
2025-10-08 15:54:13.434070 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:54:13.434084 | orchestrator |
2025-10-08 15:54:13.434095 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-10-08 15:54:13.434106 | orchestrator | Wednesday 08 October 2025 15:51:18 +0000 (0:00:00.485) 0:00:01.421 *****
2025-10-08 15:54:13.434124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-10-08 15:54:13.434266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434312 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434323 | orchestrator | 2025-10-08 15:54:13.434334 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-10-08 15:54:13.434345 | orchestrator | Wednesday 08 October 2025 15:51:20 +0000 (0:00:01.793) 0:00:03.214 ***** 2025-10-08 15:54:13.434361 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-10-08 15:54:13.434373 | orchestrator | 2025-10-08 15:54:13.434384 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-10-08 15:54:13.434394 | orchestrator | Wednesday 08 October 2025 15:51:20 +0000 (0:00:00.833) 0:00:04.047 ***** 2025-10-08 15:54:13.434405 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:54:13.434416 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:54:13.434427 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:54:13.434438 | orchestrator | 2025-10-08 15:54:13.434448 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-10-08 15:54:13.434459 | orchestrator | Wednesday 08 October 2025 15:51:21 +0000 (0:00:00.523) 0:00:04.571 ***** 2025-10-08 15:54:13.434470 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 15:54:13.434481 | orchestrator | 2025-10-08 15:54:13.434492 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-10-08 15:54:13.434502 | orchestrator | Wednesday 08 October 2025 15:51:22 +0000 (0:00:00.726) 0:00:05.297 ***** 2025-10-08 15:54:13.434513 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:54:13.434524 | orchestrator | 2025-10-08 15:54:13.434541 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-10-08 15:54:13.434552 | orchestrator | Wednesday 08 October 2025 15:51:22 +0000 (0:00:00.557) 0:00:05.855 ***** 2025-10-08 15:54:13.434564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.434615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.434701 | orchestrator | 2025-10-08 15:54:13.434712 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-10-08 15:54:13.434724 | orchestrator | Wednesday 08 October 2025 15:51:26 +0000 (0:00:03.410) 0:00:09.265 ***** 2025-10-08 15:54:13.434740 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.434759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.434779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.434790 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.434802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.434814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.434836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.434848 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.434866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.434885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.434897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.434908 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.434919 | orchestrator | 2025-10-08 15:54:13.434930 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-10-08 15:54:13.434941 | orchestrator | Wednesday 08 October 2025 15:51:26 +0000 (0:00:00.827) 0:00:10.093 ***** 2025-10-08 15:54:13.434953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.434970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.434981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.434999 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.435018 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.435031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.435053 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.435065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-08 15:54:13.435082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-08 15:54:13.435119 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.435130 | orchestrator | 2025-10-08 15:54:13.435142 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-10-08 15:54:13.435199 | orchestrator | Wednesday 08 October 2025 15:51:27 +0000 (0:00:00.765) 0:00:10.858 ***** 2025-10-08 15:54:13.435212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435340 | orchestrator | 2025-10-08 15:54:13.435351 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-10-08 15:54:13.435367 | orchestrator | Wednesday 08 October 2025 15:51:30 +0000 (0:00:03.224) 0:00:14.083 ***** 2025-10-08 15:54:13.435395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435519 | orchestrator | 2025-10-08 15:54:13.435530 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-10-08 15:54:13.435541 | orchestrator | Wednesday 08 October 2025 15:51:36 +0000 (0:00:05.446) 0:00:19.530 ***** 2025-10-08 15:54:13.435552 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.435563 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:54:13.435574 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:54:13.435585 | orchestrator | 2025-10-08 15:54:13.435596 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-10-08 15:54:13.435607 | orchestrator | Wednesday 08 October 2025 15:51:37 +0000 (0:00:01.537) 0:00:21.067 ***** 2025-10-08 15:54:13.435617 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.435628 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.435639 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.435650 | orchestrator | 2025-10-08 15:54:13.435661 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-10-08 15:54:13.435672 | orchestrator | Wednesday 08 October 2025 15:51:38 +0000 (0:00:00.547) 0:00:21.614 ***** 2025-10-08 15:54:13.435689 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.435700 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.435711 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.435722 | orchestrator | 2025-10-08 15:54:13.435733 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-10-08 15:54:13.435743 | orchestrator | Wednesday 08 October 2025 15:51:38 +0000 (0:00:00.298) 0:00:21.913 ***** 2025-10-08 15:54:13.435754 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.435765 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.435776 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.435787 | orchestrator | 2025-10-08 15:54:13.435797 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-10-08 15:54:13.435808 | orchestrator | Wednesday 08 October 2025 15:51:39 +0000 (0:00:00.413) 0:00:22.327 ***** 2025-10-08 15:54:13.435824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.435907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-08 15:54:13.435926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.435961 | orchestrator | 2025-10-08 15:54:13.435971 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-08 15:54:13.435983 | orchestrator | Wednesday 08 October 2025 15:51:41 +0000 (0:00:02.251) 0:00:24.578 ***** 2025-10-08 15:54:13.435994 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.436011 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.436022 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.436033 | orchestrator | 2025-10-08 15:54:13.436044 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-10-08 15:54:13.436054 | orchestrator | Wednesday 08 October 2025 15:51:41 +0000 (0:00:00.298) 0:00:24.876 ***** 2025-10-08 15:54:13.436065 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-08 15:54:13.436077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-08 15:54:13.436088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-08 15:54:13.436099 | orchestrator | 2025-10-08 
15:54:13.436109 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-10-08 15:54:13.436120 | orchestrator | Wednesday 08 October 2025 15:51:43 +0000 (0:00:01.769) 0:00:26.646 ***** 2025-10-08 15:54:13.436131 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 15:54:13.436142 | orchestrator | 2025-10-08 15:54:13.436169 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-10-08 15:54:13.436181 | orchestrator | Wednesday 08 October 2025 15:51:44 +0000 (0:00:00.818) 0:00:27.465 ***** 2025-10-08 15:54:13.436192 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.436203 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.436213 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.436224 | orchestrator | 2025-10-08 15:54:13.436235 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-10-08 15:54:13.436246 | orchestrator | Wednesday 08 October 2025 15:51:45 +0000 (0:00:00.653) 0:00:28.118 ***** 2025-10-08 15:54:13.436257 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 15:54:13.436268 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-08 15:54:13.436279 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-08 15:54:13.436290 | orchestrator | 2025-10-08 15:54:13.436300 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-10-08 15:54:13.436316 | orchestrator | Wednesday 08 October 2025 15:51:45 +0000 (0:00:00.968) 0:00:29.087 ***** 2025-10-08 15:54:13.436327 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:54:13.436338 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:54:13.436349 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:54:13.436360 | orchestrator | 2025-10-08 15:54:13.436370 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-10-08 
15:54:13.436381 | orchestrator | Wednesday 08 October 2025 15:51:46 +0000 (0:00:00.300) 0:00:29.388 ***** 2025-10-08 15:54:13.436392 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-08 15:54:13.436402 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-08 15:54:13.436413 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-08 15:54:13.436424 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-08 15:54:13.436435 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-08 15:54:13.436452 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-08 15:54:13.436464 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-08 15:54:13.436475 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-08 15:54:13.436486 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-08 15:54:13.436496 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-08 15:54:13.436507 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-08 15:54:13.436525 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-08 15:54:13.436536 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-08 15:54:13.436547 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-10-08 15:54:13.436557 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-08 15:54:13.436568 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 15:54:13.436579 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 15:54:13.436590 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 15:54:13.436601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 15:54:13.436612 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 15:54:13.436622 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 15:54:13.436633 | orchestrator | 2025-10-08 15:54:13.436644 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-10-08 15:54:13.436655 | orchestrator | Wednesday 08 October 2025 15:51:54 +0000 (0:00:08.567) 0:00:37.955 ***** 2025-10-08 15:54:13.436665 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 15:54:13.436676 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 15:54:13.436687 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 15:54:13.436697 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 15:54:13.436708 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 15:54:13.436718 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 15:54:13.436729 | orchestrator | 
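The `fernet-rotate.sh`, `fernet-node-sync.sh`, and `fernet-push.sh` scripts staged above manage Keystone's fernet signing keys (rotation on one node, rsync-over-ssh distribution to the others via the keystone-ssh container on port 8023). As a rough illustration only, not the kolla script itself: a fernet key is 32 random bytes, urlsafe-base64 encoded, which is the format `keystone-manage fernet_rotate` writes to the key files under `/etc/keystone/fernet-keys`:

```python
import base64
import os

def generate_fernet_key() -> bytes:
    """Generate a fernet-style key: 32 random bytes, urlsafe-base64 encoded.

    Illustrative sketch of the key format only; the actual rotation is done
    by `keystone-manage fernet_rotate` inside the keystone_fernet container.
    """
    return base64.urlsafe_b64encode(os.urandom(32))

key = generate_fernet_key()
print(len(key))  # 44: 32 raw bytes encode to 44 base64 characters
```

Rotation keeps a staged key (index 0), a primary key (highest index), and older secondary keys, which is why the keys must be synchronized cluster-wide before the primary rotates.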
2025-10-08 15:54:13.436740 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-10-08 15:54:13.436750 | orchestrator | Wednesday 08 October 2025 15:51:57 +0000 (0:00:02.706) 0:00:40.662 ***** 2025-10-08 15:54:13.436766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.436787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.436808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-08 15:54:13.436820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436884 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-08 15:54:13.436908 | orchestrator | 2025-10-08 15:54:13.436919 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-08 15:54:13.436929 | orchestrator | Wednesday 08 October 2025 15:51:59 +0000 (0:00:02.220) 0:00:42.883 ***** 2025-10-08 15:54:13.436940 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.436951 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.436962 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.436973 | orchestrator | 2025-10-08 15:54:13.436984 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-10-08 15:54:13.436995 | orchestrator | Wednesday 08 October 
2025 15:52:00 +0000 (0:00:00.249) 0:00:43.132 ***** 2025-10-08 15:54:13.437005 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.437016 | orchestrator | 2025-10-08 15:54:13.437027 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-10-08 15:54:13.437038 | orchestrator | Wednesday 08 October 2025 15:52:02 +0000 (0:00:02.331) 0:00:45.464 ***** 2025-10-08 15:54:13.437048 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.437059 | orchestrator | 2025-10-08 15:54:13.437070 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-10-08 15:54:13.437081 | orchestrator | Wednesday 08 October 2025 15:52:04 +0000 (0:00:02.344) 0:00:47.808 ***** 2025-10-08 15:54:13.437091 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:54:13.437102 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:54:13.437113 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:54:13.437124 | orchestrator | 2025-10-08 15:54:13.437134 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-10-08 15:54:13.437145 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:00.917) 0:00:48.726 ***** 2025-10-08 15:54:13.437172 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:54:13.437183 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:54:13.437194 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:54:13.437205 | orchestrator | 2025-10-08 15:54:13.437216 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-10-08 15:54:13.437226 | orchestrator | Wednesday 08 October 2025 15:52:05 +0000 (0:00:00.307) 0:00:49.033 ***** 2025-10-08 15:54:13.437237 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:54:13.437248 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:54:13.437259 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:54:13.437269 | orchestrator | 2025-10-08 
15:54:13.437280 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-10-08 15:54:13.437291 | orchestrator | Wednesday 08 October 2025 15:52:06 +0000 (0:00:00.575) 0:00:49.609 ***** 2025-10-08 15:54:13.437301 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.437312 | orchestrator | 2025-10-08 15:54:13.437329 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-10-08 15:54:13.437340 | orchestrator | Wednesday 08 October 2025 15:52:20 +0000 (0:00:14.465) 0:01:04.074 ***** 2025-10-08 15:54:13.437351 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.437362 | orchestrator | 2025-10-08 15:54:13.437373 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-08 15:54:13.437384 | orchestrator | Wednesday 08 October 2025 15:52:31 +0000 (0:00:10.906) 0:01:14.980 ***** 2025-10-08 15:54:13.437394 | orchestrator | 2025-10-08 15:54:13.437405 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-08 15:54:13.437416 | orchestrator | Wednesday 08 October 2025 15:52:31 +0000 (0:00:00.067) 0:01:15.048 ***** 2025-10-08 15:54:13.437426 | orchestrator | 2025-10-08 15:54:13.437442 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-08 15:54:13.437453 | orchestrator | Wednesday 08 October 2025 15:52:32 +0000 (0:00:00.066) 0:01:15.114 ***** 2025-10-08 15:54:13.437463 | orchestrator | 2025-10-08 15:54:13.437474 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-10-08 15:54:13.437485 | orchestrator | Wednesday 08 October 2025 15:52:32 +0000 (0:00:00.064) 0:01:15.179 ***** 2025-10-08 15:54:13.437496 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:54:13.437507 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:54:13.437517 | orchestrator | changed: 
[testbed-node-2]
2025-10-08 15:54:13.437528 | orchestrator |
2025-10-08 15:54:13.437538 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-10-08 15:54:13.437550 | orchestrator | Wednesday 08 October 2025 15:52:56 +0000 (0:00:24.190) 0:01:39.369 *****
2025-10-08 15:54:13.437560 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:54:13.437571 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:54:13.437582 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:54:13.437592 | orchestrator |
2025-10-08 15:54:13.437603 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-10-08 15:54:13.437614 | orchestrator | Wednesday 08 October 2025 15:53:06 +0000 (0:00:10.158) 0:01:49.528 *****
2025-10-08 15:54:13.437625 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:54:13.437636 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:54:13.437652 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:54:13.437664 | orchestrator |
2025-10-08 15:54:13.437675 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-08 15:54:13.437686 | orchestrator | Wednesday 08 October 2025 15:53:18 +0000 (0:00:12.332) 0:02:01.861 *****
2025-10-08 15:54:13.437696 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:54:13.437707 | orchestrator |
2025-10-08 15:54:13.437718 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-10-08 15:54:13.437729 | orchestrator | Wednesday 08 October 2025 15:53:19 +0000 (0:00:00.768) 0:02:02.629 *****
2025-10-08 15:54:13.437739 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:54:13.437750 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:54:13.437761 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:54:13.437771 | orchestrator |
2025-10-08 15:54:13.437782 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-10-08 15:54:13.437793 | orchestrator | Wednesday 08 October 2025 15:53:20 +0000 (0:00:00.785) 0:02:03.415 *****
2025-10-08 15:54:13.437803 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:54:13.437814 | orchestrator |
2025-10-08 15:54:13.437825 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-10-08 15:54:13.437836 | orchestrator | Wednesday 08 October 2025 15:53:22 +0000 (0:00:01.807) 0:02:05.222 *****
2025-10-08 15:54:13.437846 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-10-08 15:54:13.437857 | orchestrator |
2025-10-08 15:54:13.437868 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-10-08 15:54:13.437879 | orchestrator | Wednesday 08 October 2025 15:53:33 +0000 (0:00:11.431) 0:02:16.654 *****
2025-10-08 15:54:13.437895 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-10-08 15:54:13.437906 | orchestrator |
2025-10-08 15:54:13.437917 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-10-08 15:54:13.437928 | orchestrator | Wednesday 08 October 2025 15:53:59 +0000 (0:00:25.761) 0:02:42.415 *****
2025-10-08 15:54:13.437938 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-10-08 15:54:13.437950 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-10-08 15:54:13.437960 | orchestrator |
2025-10-08 15:54:13.437971 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-10-08 15:54:13.437982 | orchestrator | Wednesday 08 October 2025 15:54:05 +0000 (0:00:06.156) 0:02:48.571 *****
2025-10-08 15:54:13.437993 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:54:13.438004 | orchestrator |
2025-10-08 15:54:13.438014 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-10-08 15:54:13.438055 | orchestrator | Wednesday 08 October 2025 15:54:05 +0000 (0:00:00.173) 0:02:48.745 *****
2025-10-08 15:54:13.438066 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:54:13.438077 | orchestrator |
2025-10-08 15:54:13.438087 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-10-08 15:54:13.438098 | orchestrator | Wednesday 08 October 2025 15:54:05 +0000 (0:00:00.173) 0:02:48.919 *****
2025-10-08 15:54:13.438109 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:54:13.438120 | orchestrator |
2025-10-08 15:54:13.438131 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-10-08 15:54:13.438142 | orchestrator | Wednesday 08 October 2025 15:54:05 +0000 (0:00:00.119) 0:02:49.038 *****
2025-10-08 15:54:13.438169 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:54:13.438180 | orchestrator |
2025-10-08 15:54:13.438191 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-10-08 15:54:13.438202 | orchestrator | Wednesday 08 October 2025 15:54:06 +0000 (0:00:00.692) 0:02:49.731 *****
2025-10-08 15:54:13.438212 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:54:13.438223 | orchestrator |
2025-10-08 15:54:13.438234 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-08 15:54:13.438245 | orchestrator | Wednesday 08 October 2025 15:54:09 +0000 (0:00:03.018) 0:02:52.749 *****
2025-10-08 15:54:13.438256 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:54:13.438267 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:54:13.438277 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:54:13.438288 | orchestrator |
2025-10-08 15:54:13.438299 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:54:13.438310 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-10-08 15:54:13.438326 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-10-08 15:54:13.438337 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-10-08 15:54:13.438348 | orchestrator |
2025-10-08 15:54:13.438359 | orchestrator |
2025-10-08 15:54:13.438370 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:54:13.438380 | orchestrator | Wednesday 08 October 2025 15:54:10 +0000 (0:00:00.462) 0:02:53.212 *****
2025-10-08 15:54:13.438391 | orchestrator | ===============================================================================
2025-10-08 15:54:13.438402 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.76s
2025-10-08 15:54:13.438413 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 24.19s
2025-10-08 15:54:13.438423 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.47s
2025-10-08 15:54:13.438443 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.33s
2025-10-08 15:54:13.438453 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.43s
2025-10-08 15:54:13.438471 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.91s
2025-10-08 15:54:13.438482 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.16s
2025-10-08 15:54:13.438492 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.57s
2025-10-08 15:54:13.438503 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.16s
2025-10-08 15:54:13.438514 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.45s
2025-10-08 15:54:13.438524 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s
2025-10-08 15:54:13.438535 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.23s
2025-10-08 15:54:13.438546 | orchestrator | keystone : Creating default user role ----------------------------------- 3.02s
2025-10-08 15:54:13.438556 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.71s
2025-10-08 15:54:13.438567 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s
2025-10-08 15:54:13.438577 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.33s
2025-10-08 15:54:13.438588 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.25s
2025-10-08 15:54:13.438599 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.22s
2025-10-08 15:54:13.438609 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.81s
2025-10-08 15:54:13.438620 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.79s
2025-10-08 15:54:13.438631 | orchestrator | 2025-10-08 15:54:13 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:16.685282 | orchestrator | 2025-10-08 15:54:16 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:16.685342 | orchestrator | 2025-10-08 15:54:16 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:16.685352 | orchestrator | 2025-10-08 15:54:16 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:16.685361 | orchestrator | 2025-10-08
15:54:16 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:16.685369 | orchestrator | 2025-10-08 15:54:16 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:16.685377 | orchestrator | 2025-10-08 15:54:16 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:19.511917 | orchestrator | 2025-10-08 15:54:19 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:19.512023 | orchestrator | 2025-10-08 15:54:19 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:19.512463 | orchestrator | 2025-10-08 15:54:19 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:19.513039 | orchestrator | 2025-10-08 15:54:19 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:19.513801 | orchestrator | 2025-10-08 15:54:19 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:19.513823 | orchestrator | 2025-10-08 15:54:19 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:22.538524 | orchestrator | 2025-10-08 15:54:22 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:22.538619 | orchestrator | 2025-10-08 15:54:22 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:22.539191 | orchestrator | 2025-10-08 15:54:22 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:22.539855 | orchestrator | 2025-10-08 15:54:22 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:22.541321 | orchestrator | 2025-10-08 15:54:22 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:22.541345 | orchestrator | 2025-10-08 15:54:22 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:25.565994 | orchestrator | 2025-10-08 15:54:25 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:25.566175 | orchestrator | 2025-10-08 15:54:25 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:25.569612 | orchestrator | 2025-10-08 15:54:25 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:25.570210 | orchestrator | 2025-10-08 15:54:25 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state STARTED
2025-10-08 15:54:25.570739 | orchestrator | 2025-10-08 15:54:25 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:25.570836 | orchestrator | 2025-10-08 15:54:25 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:28.606633 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:28.608907 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:28.613341 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:28.616291 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:28.617808 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task 6fb12360-74dd-4fbc-a797-e3482d23c8e6 is in state SUCCESS
2025-10-08 15:54:28.619456 | orchestrator | 2025-10-08 15:54:28 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:28.619487 | orchestrator | 2025-10-08 15:54:28 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:31.651080 | orchestrator | 2025-10-08 15:54:31 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:31.651242 | orchestrator | 2025-10-08 15:54:31 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:31.651794 | orchestrator | 2025-10-08 15:54:31 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:31.652268 | orchestrator | 2025-10-08 15:54:31 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:31.652894 | orchestrator | 2025-10-08 15:54:31 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:31.652915 | orchestrator | 2025-10-08 15:54:31 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:34.692532 | orchestrator | 2025-10-08 15:54:34 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:34.694759 | orchestrator | 2025-10-08 15:54:34 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:34.696006 | orchestrator | 2025-10-08 15:54:34 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:34.697288 | orchestrator | 2025-10-08 15:54:34 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:34.699272 | orchestrator | 2025-10-08 15:54:34 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:34.699309 | orchestrator | 2025-10-08 15:54:34 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:37.735325 | orchestrator | 2025-10-08 15:54:37 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:37.735425 | orchestrator | 2025-10-08 15:54:37 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:37.735440 | orchestrator | 2025-10-08 15:54:37 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:37.737902 | orchestrator | 2025-10-08 15:54:37 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:37.737939 | orchestrator | 2025-10-08 15:54:37 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:37.737955 | orchestrator | 2025-10-08 15:54:37 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:40.766675 | orchestrator | 2025-10-08 15:54:40 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:40.768352 | orchestrator | 2025-10-08 15:54:40 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:40.770943 | orchestrator | 2025-10-08 15:54:40 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:40.773391 | orchestrator | 2025-10-08 15:54:40 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:40.775309 | orchestrator | 2025-10-08 15:54:40 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:40.775545 | orchestrator | 2025-10-08 15:54:40 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:43.805183 | orchestrator | 2025-10-08 15:54:43 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:43.805285 | orchestrator | 2025-10-08 15:54:43 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:43.805717 | orchestrator | 2025-10-08 15:54:43 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:43.806343 | orchestrator | 2025-10-08 15:54:43 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:43.806947 | orchestrator | 2025-10-08 15:54:43 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:43.806969 | orchestrator | 2025-10-08 15:54:43 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:46.843654 | orchestrator | 2025-10-08 15:54:46 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:46.843743 | orchestrator | 2025-10-08 15:54:46 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:46.843754 | orchestrator | 2025-10-08 15:54:46 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:46.843762 | orchestrator | 2025-10-08 15:54:46 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:46.843769 | orchestrator | 2025-10-08 15:54:46 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:46.843777 | orchestrator | 2025-10-08 15:54:46 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:49.851015 | orchestrator | 2025-10-08 15:54:49 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:49.851258 | orchestrator | 2025-10-08 15:54:49 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:49.852354 | orchestrator | 2025-10-08 15:54:49 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:49.853915 | orchestrator | 2025-10-08 15:54:49 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:49.854541 | orchestrator | 2025-10-08 15:54:49 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:49.854568 | orchestrator | 2025-10-08 15:54:49 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:52.883066 | orchestrator | 2025-10-08 15:54:52 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:52.883319 | orchestrator | 2025-10-08 15:54:52 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:52.883679 | orchestrator | 2025-10-08 15:54:52 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:52.884248 | orchestrator | 2025-10-08 15:54:52 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:52.887964 | orchestrator | 2025-10-08 15:54:52 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:52.887989 | orchestrator | 2025-10-08 15:54:52 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:55.913608 | orchestrator | 2025-10-08 15:54:55 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:55.913717 | orchestrator | 2025-10-08 15:54:55 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:55.914273 | orchestrator | 2025-10-08 15:54:55 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:55.916412 | orchestrator | 2025-10-08 15:54:55 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:55.916911 | orchestrator | 2025-10-08 15:54:55 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:55.916934 | orchestrator | 2025-10-08 15:54:55 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:54:58.943600 | orchestrator | 2025-10-08 15:54:58 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:54:58.943852 | orchestrator | 2025-10-08 15:54:58 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:54:58.944589 | orchestrator | 2025-10-08 15:54:58 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:54:58.945292 | orchestrator | 2025-10-08 15:54:58 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:54:58.945950 | orchestrator | 2025-10-08 15:54:58 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:54:58.945975 | orchestrator | 2025-10-08 15:54:58 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:55:01.969243 | orchestrator | 2025-10-08 15:55:01 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:55:01.969360 | orchestrator | 2025-10-08 15:55:01 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:55:01.969840 | orchestrator | 2025-10-08 15:55:01 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:55:01.970638 | orchestrator | 2025-10-08 15:55:01 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:55:01.971291 | orchestrator | 2025-10-08 15:55:01 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:55:01.971314 | orchestrator | 2025-10-08 15:55:01 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:55:04.993318 | orchestrator | 2025-10-08 15:55:04 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:55:04.993635 | orchestrator | 2025-10-08 15:55:04 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:55:04.994259 | orchestrator | 2025-10-08 15:55:04 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:55:04.994758 | orchestrator | 2025-10-08 15:55:04 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:55:04.995419 | orchestrator | 2025-10-08 15:55:04 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state STARTED
2025-10-08 15:55:04.995439 | orchestrator | 2025-10-08 15:55:04 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:55:08.027531 | orchestrator | 2025-10-08 15:55:08 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:55:08.027708 | orchestrator | 2025-10-08 15:55:08 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED
2025-10-08 15:55:08.028334 | orchestrator | 2025-10-08 15:55:08 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED
2025-10-08 15:55:08.028982 | orchestrator | 2025-10-08 15:55:08 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:55:08.029522 | orchestrator | 2025-10-08 15:55:08 | INFO  | Task 6b0e5e14-3054-485c-a6fb-0dfeb7c571ff is in state SUCCESS
2025-10-08 15:55:08.029953 | orchestrator |
2025-10-08 15:55:08.029977 | orchestrator
| 2025-10-08 15:55:08.029988 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:55:08.029998 | orchestrator | 2025-10-08 15:55:08.030009 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:55:08.030059 | orchestrator | Wednesday 08 October 2025 15:53:52 +0000 (0:00:00.336) 0:00:00.336 ***** 2025-10-08 15:55:08.030072 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:55:08.030082 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:55:08.030092 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:55:08.030102 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:55:08.030111 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:55:08.030157 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:55:08.030167 | orchestrator | ok: [testbed-manager] 2025-10-08 15:55:08.030177 | orchestrator | 2025-10-08 15:55:08.030187 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:55:08.030197 | orchestrator | Wednesday 08 October 2025 15:53:52 +0000 (0:00:00.736) 0:00:01.072 ***** 2025-10-08 15:55:08.030207 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030217 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030227 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030237 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030246 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030256 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030266 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-10-08 15:55:08.030276 | orchestrator | 2025-10-08 15:55:08.030286 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 
2025-10-08 15:55:08.030296 | orchestrator | 2025-10-08 15:55:08.030305 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-10-08 15:55:08.030315 | orchestrator | Wednesday 08 October 2025 15:53:53 +0000 (0:00:00.947) 0:00:02.020 ***** 2025-10-08 15:55:08.030325 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-10-08 15:55:08.030336 | orchestrator | 2025-10-08 15:55:08.030346 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-10-08 15:55:08.030356 | orchestrator | Wednesday 08 October 2025 15:53:56 +0000 (0:00:02.890) 0:00:04.911 ***** 2025-10-08 15:55:08.030377 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-10-08 15:55:08.030388 | orchestrator | 2025-10-08 15:55:08.030397 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-10-08 15:55:08.030424 | orchestrator | Wednesday 08 October 2025 15:54:01 +0000 (0:00:04.628) 0:00:09.539 ***** 2025-10-08 15:55:08.030435 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-10-08 15:55:08.030446 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-10-08 15:55:08.030456 | orchestrator | 2025-10-08 15:55:08.030465 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-10-08 15:55:08.030475 | orchestrator | Wednesday 08 October 2025 15:54:07 +0000 (0:00:06.391) 0:00:15.930 ***** 2025-10-08 15:55:08.030485 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-08 15:55:08.030494 | orchestrator | 2025-10-08 15:55:08.030504 | orchestrator | TASK [service-ks-register : ceph-rgw | 
Creating users] ************************* 2025-10-08 15:55:08.030513 | orchestrator | Wednesday 08 October 2025 15:54:10 +0000 (0:00:03.057) 0:00:18.988 ***** 2025-10-08 15:55:08.030523 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-08 15:55:08.030532 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-10-08 15:55:08.030542 | orchestrator | 2025-10-08 15:55:08.030551 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-10-08 15:55:08.030561 | orchestrator | Wednesday 08 October 2025 15:54:14 +0000 (0:00:03.640) 0:00:22.628 ***** 2025-10-08 15:55:08.030573 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 15:55:08.030585 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-10-08 15:55:08.030596 | orchestrator | 2025-10-08 15:55:08.030607 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-10-08 15:55:08.030618 | orchestrator | Wednesday 08 October 2025 15:54:20 +0000 (0:00:05.848) 0:00:28.477 ***** 2025-10-08 15:55:08.030629 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-10-08 15:55:08.030640 | orchestrator | 2025-10-08 15:55:08.030651 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:55:08.030663 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030675 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030687 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030698 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030709 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030731 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030743 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.030754 | orchestrator | 2025-10-08 15:55:08.030765 | orchestrator | 2025-10-08 15:55:08.030777 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:55:08.030788 | orchestrator | Wednesday 08 October 2025 15:54:26 +0000 (0:00:06.257) 0:00:34.734 ***** 2025-10-08 15:55:08.030799 | orchestrator | =============================================================================== 2025-10-08 15:55:08.030810 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.39s 2025-10-08 15:55:08.030821 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.26s 2025-10-08 15:55:08.030832 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.85s 2025-10-08 15:55:08.030843 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.63s 2025-10-08 15:55:08.030862 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.64s 2025-10-08 15:55:08.030874 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.06s 2025-10-08 15:55:08.030884 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.89s 2025-10-08 15:55:08.030895 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s 2025-10-08 15:55:08.030906 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2025-10-08 15:55:08.030917 | orchestrator | 2025-10-08 15:55:08.030927 | orchestrator | 2025-10-08 15:55:08.030936 | orchestrator 
| PLAY [Bootstraph ceph dashboard] *********************************************** 2025-10-08 15:55:08.030946 | orchestrator | 2025-10-08 15:55:08.030955 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-10-08 15:55:08.030965 | orchestrator | Wednesday 08 October 2025 15:53:44 +0000 (0:00:00.237) 0:00:00.237 ***** 2025-10-08 15:55:08.030975 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.030984 | orchestrator | 2025-10-08 15:55:08.030994 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-10-08 15:55:08.031004 | orchestrator | Wednesday 08 October 2025 15:53:46 +0000 (0:00:01.162) 0:00:01.399 ***** 2025-10-08 15:55:08.031013 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031023 | orchestrator | 2025-10-08 15:55:08.031037 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-10-08 15:55:08.031047 | orchestrator | Wednesday 08 October 2025 15:53:46 +0000 (0:00:00.944) 0:00:02.344 ***** 2025-10-08 15:55:08.031056 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031066 | orchestrator | 2025-10-08 15:55:08.031075 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-10-08 15:55:08.031085 | orchestrator | Wednesday 08 October 2025 15:53:47 +0000 (0:00:00.960) 0:00:03.305 ***** 2025-10-08 15:55:08.031095 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031104 | orchestrator | 2025-10-08 15:55:08.031126 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-10-08 15:55:08.031137 | orchestrator | Wednesday 08 October 2025 15:53:49 +0000 (0:00:01.303) 0:00:04.608 ***** 2025-10-08 15:55:08.031146 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031156 | orchestrator | 2025-10-08 15:55:08.031165 | orchestrator | TASK [Set 
mgr/dashboard/standby_error_status_code to 404] ********************** 2025-10-08 15:55:08.031175 | orchestrator | Wednesday 08 October 2025 15:53:50 +0000 (0:00:01.121) 0:00:05.729 ***** 2025-10-08 15:55:08.031184 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031194 | orchestrator | 2025-10-08 15:55:08.031203 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-10-08 15:55:08.031213 | orchestrator | Wednesday 08 October 2025 15:53:51 +0000 (0:00:00.985) 0:00:06.715 ***** 2025-10-08 15:55:08.031223 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031232 | orchestrator | 2025-10-08 15:55:08.031242 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-10-08 15:55:08.031251 | orchestrator | Wednesday 08 October 2025 15:53:53 +0000 (0:00:02.214) 0:00:08.930 ***** 2025-10-08 15:55:08.031261 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031270 | orchestrator | 2025-10-08 15:55:08.031280 | orchestrator | TASK [Create admin user] ******************************************************* 2025-10-08 15:55:08.031290 | orchestrator | Wednesday 08 October 2025 15:53:54 +0000 (0:00:01.215) 0:00:10.146 ***** 2025-10-08 15:55:08.031299 | orchestrator | changed: [testbed-manager] 2025-10-08 15:55:08.031309 | orchestrator | 2025-10-08 15:55:08.031318 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-10-08 15:55:08.031328 | orchestrator | Wednesday 08 October 2025 15:54:41 +0000 (0:00:46.987) 0:00:57.133 ***** 2025-10-08 15:55:08.031337 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:55:08.031347 | orchestrator | 2025-10-08 15:55:08.031356 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-08 15:55:08.031366 | orchestrator | 2025-10-08 15:55:08.031383 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2025-10-08 15:55:08.031393 | orchestrator | Wednesday 08 October 2025 15:54:41 +0000 (0:00:00.150) 0:00:57.283 ***** 2025-10-08 15:55:08.031403 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:55:08.031412 | orchestrator | 2025-10-08 15:55:08.031422 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-08 15:55:08.031431 | orchestrator | 2025-10-08 15:55:08.031441 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-08 15:55:08.031450 | orchestrator | Wednesday 08 October 2025 15:54:43 +0000 (0:00:01.633) 0:00:58.916 ***** 2025-10-08 15:55:08.031460 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:55:08.031469 | orchestrator | 2025-10-08 15:55:08.031479 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-08 15:55:08.031489 | orchestrator | 2025-10-08 15:55:08.031498 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-08 15:55:08.031508 | orchestrator | Wednesday 08 October 2025 15:54:54 +0000 (0:00:11.226) 0:01:10.143 ***** 2025-10-08 15:55:08.031518 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:55:08.031527 | orchestrator | 2025-10-08 15:55:08.031542 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:55:08.031552 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-08 15:55:08.031562 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.031572 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 15:55:08.031582 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 
15:55:08.031591 | orchestrator | 2025-10-08 15:55:08.031601 | orchestrator | 2025-10-08 15:55:08.031610 | orchestrator | 2025-10-08 15:55:08.031620 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:55:08.031630 | orchestrator | Wednesday 08 October 2025 15:55:05 +0000 (0:00:11.178) 0:01:21.321 ***** 2025-10-08 15:55:08.031639 | orchestrator | =============================================================================== 2025-10-08 15:55:08.031649 | orchestrator | Create admin user ------------------------------------------------------ 46.99s 2025-10-08 15:55:08.031658 | orchestrator | Restart ceph manager service ------------------------------------------- 24.04s 2025-10-08 15:55:08.031668 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.21s 2025-10-08 15:55:08.031677 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.30s 2025-10-08 15:55:08.031687 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.22s 2025-10-08 15:55:08.031697 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.16s 2025-10-08 15:55:08.031706 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s 2025-10-08 15:55:08.031716 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.99s 2025-10-08 15:55:08.031725 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.96s 2025-10-08 15:55:08.031739 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s 2025-10-08 15:55:08.031749 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2025-10-08 15:55:08.031759 | orchestrator | 2025-10-08 15:55:08 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:55:11.072093 | 
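The TASKS RECAP above corresponds to a handful of plain ceph CLI calls; a minimal sketch, not the playbook itself (the admin user name, role name, password variable, and temporary file path are placeholders, not values taken from this job):

```shell
# Dashboard settings from the play above, applied via the mgr config store.
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404

# Disable and re-enable the module so the settings take effect.
ceph mgr module disable dashboard
ceph mgr module enable dashboard

# Create the admin account from a password file so the secret never
# appears on the command line; remove the file afterwards.
printf '%s' "$DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password  # placeholder path
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password

# Restart the manager daemons one node at a time, as the three
# "Restart ceph manager service" plays do (unit name varies by
# deployment; packaged clusters use ceph-mgr@<id>.service).
systemctl restart ceph-mgr.target
```

These are configuration commands against a live Ceph cluster, so they are shown as a fragment rather than a runnable script.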
orchestrator | 2025-10-08 15:55:11 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED 2025-10-08 15:55:11.072222 | orchestrator | 2025-10-08 15:55:11 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state STARTED 2025-10-08 15:55:11.072260 | orchestrator | 2025-10-08 15:55:11 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state STARTED 2025-10-08 15:55:11.072272 | orchestrator | 2025-10-08 15:55:11 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 15:55:11.072283 | orchestrator | 2025-10-08 15:55:11 | INFO  | Wait 1 second(s) until the next check
[... identical status checks of the same four tasks repeated every ~3 seconds from 15:55:14 through 15:56:48; all four tasks remained in state STARTED ...]
2025-10-08 15:56:51.695073 | orchestrator | 2025-10-08 15:56:51 | INFO  | Task
af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED 2025-10-08 15:56:51.697669 | orchestrator | 2025-10-08 15:56:51 | INFO  | Task 9b68a156-a003-4645-b7a2-40c5902f991a is in state SUCCESS 2025-10-08 15:56:51.700314 | orchestrator | 2025-10-08 15:56:51.700426 | orchestrator | 2025-10-08 15:56:51.700441 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:56:51.700454 | orchestrator | 2025-10-08 15:56:51.700465 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:56:51.700478 | orchestrator | Wednesday 08 October 2025 15:53:52 +0000 (0:00:00.286) 0:00:00.286 ***** 2025-10-08 15:56:51.700489 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:56:51.700501 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:56:51.700512 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:56:51.700523 | orchestrator | 2025-10-08 15:56:51.700534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:56:51.700545 | orchestrator | Wednesday 08 October 2025 15:53:52 +0000 (0:00:00.260) 0:00:00.546 ***** 2025-10-08 15:56:51.700579 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-10-08 15:56:51.700591 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-10-08 15:56:51.700601 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-10-08 15:56:51.700612 | orchestrator | 2025-10-08 15:56:51.700623 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-10-08 15:56:51.700634 | orchestrator | 2025-10-08 15:56:51.700645 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-08 15:56:51.700656 | orchestrator | Wednesday 08 October 2025 15:53:53 +0000 (0:00:00.455) 0:00:01.002 ***** 2025-10-08 15:56:51.700681 | orchestrator | included: 
/ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:56:51.700701 | orchestrator | 2025-10-08 15:56:51.700719 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-10-08 15:56:51.700737 | orchestrator | Wednesday 08 October 2025 15:53:54 +0000 (0:00:01.378) 0:00:02.381 ***** 2025-10-08 15:56:51.700754 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-10-08 15:56:51.700772 | orchestrator | 2025-10-08 15:56:51.700790 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-10-08 15:56:51.700845 | orchestrator | Wednesday 08 October 2025 15:53:59 +0000 (0:00:05.170) 0:00:07.552 ***** 2025-10-08 15:56:51.700857 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-10-08 15:56:51.700869 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-10-08 15:56:51.700880 | orchestrator | 2025-10-08 15:56:51.700891 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-10-08 15:56:51.700901 | orchestrator | Wednesday 08 October 2025 15:54:06 +0000 (0:00:06.672) 0:00:14.225 ***** 2025-10-08 15:56:51.700912 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-10-08 15:56:51.700923 | orchestrator | 2025-10-08 15:56:51.700933 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-10-08 15:56:51.700944 | orchestrator | Wednesday 08 October 2025 15:54:09 +0000 (0:00:03.506) 0:00:17.732 ***** 2025-10-08 15:56:51.700955 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-08 15:56:51.700966 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-10-08 15:56:51.700977 | orchestrator | 2025-10-08 15:56:51.700990 | orchestrator | TASK 
[service-ks-register : glance | Creating roles] *************************** 2025-10-08 15:56:51.701009 | orchestrator | Wednesday 08 October 2025 15:54:13 +0000 (0:00:03.990) 0:00:21.722 ***** 2025-10-08 15:56:51.701028 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 15:56:51.701079 | orchestrator | 2025-10-08 15:56:51.701101 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-10-08 15:56:51.701144 | orchestrator | Wednesday 08 October 2025 15:54:17 +0000 (0:00:03.323) 0:00:25.046 ***** 2025-10-08 15:56:51.701164 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-10-08 15:56:51.701182 | orchestrator | 2025-10-08 15:56:51.701201 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-10-08 15:56:51.701219 | orchestrator | Wednesday 08 October 2025 15:54:21 +0000 (0:00:04.599) 0:00:29.645 ***** 2025-10-08 15:56:51.701269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.701323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.701338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-08 15:56:51.701357 | orchestrator |
2025-10-08 15:56:51.701368 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-10-08 15:56:51.701379 | orchestrator | Wednesday 08 October 2025 15:54:28 +0000 (0:00:06.480) 0:00:36.125 *****
2025-10-08 15:56:51.701426 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:56:51.701439 | orchestrator |
2025-10-08 15:56:51.701459 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-10-08 15:56:51.701470 | orchestrator | Wednesday 08 October 2025 15:54:28 +0000 (0:00:00.599) 0:00:36.725 *****
2025-10-08 15:56:51.701481 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.701493 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.701504 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.701514 | orchestrator |
2025-10-08 15:56:51.701525 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-10-08 15:56:51.701536 | orchestrator | Wednesday 08 October 2025 15:54:32 +0000 (0:00:03.690) 0:00:40.415 *****
2025-10-08 15:56:51.701547 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701558 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701580 | orchestrator |
2025-10-08 15:56:51.701591 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-10-08 15:56:51.701602 | orchestrator | Wednesday 08 October 2025 15:54:33 +0000 (0:00:01.416) 0:00:41.832 *****
2025-10-08 15:56:51.701618 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701630 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:56:51.701651 | orchestrator |
2025-10-08 15:56:51.701662 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-10-08 15:56:51.701673 | orchestrator | Wednesday 08 October 2025 15:54:35 +0000 (0:00:01.111) 0:00:42.943 *****
2025-10-08 15:56:51.701683 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:56:51.701694 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:56:51.701705 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:56:51.701716 | orchestrator |
2025-10-08 15:56:51.701727 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-10-08 15:56:51.701737 | orchestrator | Wednesday 08 October 2025 15:54:35 +0000 (0:00:00.263) 0:00:43.608 *****
2025-10-08 15:56:51.701748 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.701759 | orchestrator |
2025-10-08 15:56:51.701770 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-10-08 15:56:51.701780 | orchestrator | Wednesday 08 October 2025 15:54:36 +0000 (0:00:00.263) 0:00:43.872 *****
2025-10-08 15:56:51.701791 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.701802 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.701813 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.701824 | orchestrator |
2025-10-08 15:56:51.701834
| orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-08 15:56:51.701845 | orchestrator | Wednesday 08 October 2025 15:54:36 +0000 (0:00:00.279) 0:00:44.152 ***** 2025-10-08 15:56:51.701863 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:56:51.701874 | orchestrator | 2025-10-08 15:56:51.701884 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-10-08 15:56:51.701895 | orchestrator | Wednesday 08 October 2025 15:54:36 +0000 (0:00:00.547) 0:00:44.699 ***** 2025-10-08 15:56:51.701913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.701933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.701945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.701965 | orchestrator | 2025-10-08 15:56:51.701977 | orchestrator | TASK [service-cert-copy : glance | 
Copying over backend internal TLS certificate] *** 2025-10-08 15:56:51.701987 | orchestrator | Wednesday 08 October 2025 15:54:40 +0000 (0:00:03.976) 0:00:48.676 ***** 2025-10-08 15:56:51.702063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702080 | orchestrator | skipping: [testbed-node-1] 2025-10-08 
15:56:51.702093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702113 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.702166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702179 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.702191 | orchestrator | 2025-10-08 15:56:51.702201 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-10-08 15:56:51.702212 | orchestrator | Wednesday 08 October 2025 15:54:44 +0000 (0:00:03.852) 0:00:52.529 ***** 2025-10-08 15:56:51.702230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702250 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.702270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702282 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.702299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-08 15:56:51.702324 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.702335 | orchestrator | 2025-10-08 15:56:51.702346 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-10-08 15:56:51.702357 | orchestrator | Wednesday 08 October 2025 15:54:47 +0000 (0:00:03.150) 0:00:55.679 ***** 2025-10-08 15:56:51.702367 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.702378 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.702389 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.702400 | orchestrator | 2025-10-08 15:56:51.702411 | orchestrator | TASK 
[glance : Copying over config.json files for services] ******************** 2025-10-08 15:56:51.702422 | orchestrator | Wednesday 08 October 2025 15:54:50 +0000 (0:00:02.934) 0:00:58.614 ***** 2025-10-08 15:56:51.702439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.702457 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.702477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.702489 | orchestrator | 2025-10-08 15:56:51.702500 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-10-08 15:56:51.702510 | orchestrator | Wednesday 08 October 2025 15:54:54 +0000 (0:00:04.034) 0:01:02.648 ***** 2025-10-08 15:56:51.702521 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.702532 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:56:51.702543 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:56:51.702554 | orchestrator | 2025-10-08 15:56:51.702564 | 
orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-10-08 15:56:51.702575 | orchestrator | Wednesday 08 October 2025 15:55:01 +0000 (0:00:06.688) 0:01:09.337 *****
2025-10-08 15:56:51.702586 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702596 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702607 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.702618 | orchestrator |
2025-10-08 15:56:51.702629 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-10-08 15:56:51.702646 | orchestrator | Wednesday 08 October 2025 15:55:06 +0000 (0:00:05.224) 0:01:14.561 *****
2025-10-08 15:56:51.702658 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702669 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702680 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.702690 | orchestrator |
2025-10-08 15:56:51.702701 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-10-08 15:56:51.702712 | orchestrator | Wednesday 08 October 2025 15:55:11 +0000 (0:00:04.539) 0:01:19.100 *****
2025-10-08 15:56:51.702723 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.702734 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702751 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702762 | orchestrator |
2025-10-08 15:56:51.702772 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-10-08 15:56:51.702783 | orchestrator | Wednesday 08 October 2025 15:55:15 +0000 (0:00:03.802) 0:01:22.903 *****
2025-10-08 15:56:51.702794 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.702805 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702815 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702826 | orchestrator |
2025-10-08 15:56:51.702837 |
orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-10-08 15:56:51.702847 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:05.229) 0:01:28.133 *****
2025-10-08 15:56:51.702858 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702869 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702884 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.702895 | orchestrator |
2025-10-08 15:56:51.702906 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-10-08 15:56:51.702917 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:00.267) 0:01:28.400 *****
2025-10-08 15:56:51.702927 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-10-08 15:56:51.702938 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.702949 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-10-08 15:56:51.702960 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.702971 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-10-08 15:56:51.703037 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.703050 | orchestrator |
2025-10-08 15:56:51.703061 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-10-08 15:56:51.703072 | orchestrator | Wednesday 08 October 2025 15:55:24 +0000 (0:00:03.744) 0:01:32.145 *****
2025-10-08 15:56:51.703267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.703303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.703350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-08 15:56:51.703364 | orchestrator | 2025-10-08 15:56:51.703375 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-08 15:56:51.703386 | orchestrator | Wednesday 08 October 2025 15:55:30 +0000 (0:00:05.954) 0:01:38.100 ***** 2025-10-08 15:56:51.703397 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.703407 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.703418 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.703429 | orchestrator | 2025-10-08 15:56:51.703440 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-10-08 15:56:51.703451 | orchestrator | Wednesday 08 October 2025 15:55:30 +0000 (0:00:00.590) 0:01:38.690 ***** 2025-10-08 15:56:51.703462 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703473 | orchestrator | 2025-10-08 15:56:51.703484 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-10-08 15:56:51.703504 | orchestrator | Wednesday 08 October 
2025 15:55:33 +0000 (0:00:02.613) 0:01:41.304 ***** 2025-10-08 15:56:51.703543 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703554 | orchestrator | 2025-10-08 15:56:51.703565 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-10-08 15:56:51.703576 | orchestrator | Wednesday 08 October 2025 15:55:35 +0000 (0:00:02.456) 0:01:43.760 ***** 2025-10-08 15:56:51.703587 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703598 | orchestrator | 2025-10-08 15:56:51.703608 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-10-08 15:56:51.703619 | orchestrator | Wednesday 08 October 2025 15:55:38 +0000 (0:00:02.417) 0:01:46.177 ***** 2025-10-08 15:56:51.703630 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703641 | orchestrator | 2025-10-08 15:56:51.703651 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-10-08 15:56:51.703662 | orchestrator | Wednesday 08 October 2025 15:56:11 +0000 (0:00:32.839) 0:02:19.017 ***** 2025-10-08 15:56:51.703673 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703684 | orchestrator | 2025-10-08 15:56:51.703700 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-08 15:56:51.703710 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:02.179) 0:02:21.197 ***** 2025-10-08 15:56:51.703720 | orchestrator | 2025-10-08 15:56:51.703729 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-08 15:56:51.703739 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.067) 0:02:21.264 ***** 2025-10-08 15:56:51.703748 | orchestrator | 2025-10-08 15:56:51.703758 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-08 15:56:51.703767 | orchestrator | Wednesday 08 October 
2025 15:56:13 +0000 (0:00:00.062) 0:02:21.327 ***** 2025-10-08 15:56:51.703777 | orchestrator | 2025-10-08 15:56:51.703787 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-10-08 15:56:51.703796 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.068) 0:02:21.395 ***** 2025-10-08 15:56:51.703806 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:56:51.703815 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:56:51.703825 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:56:51.703835 | orchestrator | 2025-10-08 15:56:51.703844 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:56:51.703855 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-10-08 15:56:51.703871 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-08 15:56:51.703880 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-08 15:56:51.703890 | orchestrator | 2025-10-08 15:56:51.703900 | orchestrator | 2025-10-08 15:56:51.703910 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:56:51.703919 | orchestrator | Wednesday 08 October 2025 15:56:51 +0000 (0:00:37.690) 0:02:59.086 ***** 2025-10-08 15:56:51.703929 | orchestrator | =============================================================================== 2025-10-08 15:56:51.703939 | orchestrator | glance : Restart glance-api container ---------------------------------- 37.69s 2025-10-08 15:56:51.703969 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.84s 2025-10-08 15:56:51.704007 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.69s 2025-10-08 15:56:51.704017 | orchestrator | 
service-ks-register : glance | Creating endpoints ----------------------- 6.67s 2025-10-08 15:56:51.704027 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.48s 2025-10-08 15:56:51.704037 | orchestrator | glance : Check glance containers ---------------------------------------- 5.95s 2025-10-08 15:56:51.704053 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.23s 2025-10-08 15:56:51.704101 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.22s 2025-10-08 15:56:51.704154 | orchestrator | service-ks-register : glance | Creating services ------------------------ 5.17s 2025-10-08 15:56:51.704164 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.60s 2025-10-08 15:56:51.704174 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.54s 2025-10-08 15:56:51.704251 | orchestrator | glance : Copying over config.json files for services -------------------- 4.03s 2025-10-08 15:56:51.704263 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.99s 2025-10-08 15:56:51.704272 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.98s 2025-10-08 15:56:51.704282 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.85s 2025-10-08 15:56:51.704312 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.80s 2025-10-08 15:56:51.704324 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.74s 2025-10-08 15:56:51.704333 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.69s 2025-10-08 15:56:51.704343 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.51s 2025-10-08 15:56:51.704352 | orchestrator | 
service-ks-register : glance | Creating roles --------------------------- 3.32s 2025-10-08 15:56:51.704362 | orchestrator | 2025-10-08 15:56:51.704372 | orchestrator | 2025-10-08 15:56:51 | INFO  | Task 756195b4-1aff-413c-bbe4-c2a24a419f05 is in state SUCCESS 2025-10-08 15:56:51.704897 | orchestrator | 2025-10-08 15:56:51.704918 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:56:51.704928 | orchestrator | 2025-10-08 15:56:51.704938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:56:51.704948 | orchestrator | Wednesday 08 October 2025 15:53:44 +0000 (0:00:00.252) 0:00:00.252 ***** 2025-10-08 15:56:51.704958 | orchestrator | ok: [testbed-manager] 2025-10-08 15:56:51.704968 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:56:51.704978 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:56:51.705048 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:56:51.705201 | orchestrator | ok: [testbed-node-3] 2025-10-08 15:56:51.705214 | orchestrator | ok: [testbed-node-4] 2025-10-08 15:56:51.705224 | orchestrator | ok: [testbed-node-5] 2025-10-08 15:56:51.705234 | orchestrator | 2025-10-08 15:56:51.705244 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:56:51.705254 | orchestrator | Wednesday 08 October 2025 15:53:45 +0000 (0:00:00.792) 0:00:01.044 ***** 2025-10-08 15:56:51.705263 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705273 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705283 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705293 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705302 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705311 | orchestrator | ok: 
[testbed-node-4] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705321 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-10-08 15:56:51.705330 | orchestrator | 2025-10-08 15:56:51.705340 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-10-08 15:56:51.705349 | orchestrator | 2025-10-08 15:56:51.705359 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-08 15:56:51.705369 | orchestrator | Wednesday 08 October 2025 15:53:46 +0000 (0:00:00.737) 0:00:01.781 ***** 2025-10-08 15:56:51.705379 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:56:51.705390 | orchestrator | 2025-10-08 15:56:51.705400 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-10-08 15:56:51.705420 | orchestrator | Wednesday 08 October 2025 15:53:47 +0000 (0:00:01.607) 0:00:03.389 ***** 2025-10-08 15:56:51.705438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705451 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:56:51.705462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705607 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705620 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705717 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:56:51.705729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-10-08 15:56:51.705750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.705790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.705833 | orchestrator | 
2025-10-08 15:56:51.705843 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-08 15:56:51.705852 | orchestrator | Wednesday 08 October 2025 15:53:51 +0000 (0:00:03.642) 0:00:07.032 ***** 2025-10-08 15:56:51.705863 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:56:51.705873 | orchestrator | 2025-10-08 15:56:51.705882 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-10-08 15:56:51.705892 | orchestrator | Wednesday 08 October 2025 15:53:52 +0000 (0:00:01.424) 0:00:08.456 ***** 2025-10-08 15:56:51.705906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:56:51.705917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.705989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.706004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706042 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706108 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706251 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706266 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:56:51.706275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.706304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.706359 | orchestrator | 2025-10-08 15:56:51.706368 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-10-08 15:56:51.706376 | orchestrator | Wednesday 08 October 2025 15:53:59 +0000 (0:00:06.597) 0:00:15.053 ***** 2025-10-08 15:56:51.706384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-08 15:56:51.706397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-08 15:56:51.706432 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-10-08 15:56:51.706486 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:56:51.706494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706641 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706671 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.706695 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.706704 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.706718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706743 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.706755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706789 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:56:51.706797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706861 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.706869 | orchestrator | 2025-10-08 15:56:51.706877 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-10-08 15:56:51.706885 | orchestrator | Wednesday 08 October 2025 15:54:01 +0000 (0:00:01.615) 0:00:16.669 ***** 2025-10-08 15:56:51.706893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-08 15:56:51.706906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.706928 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-08 15:56:51.706937 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706945 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:56:51.706959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-10-08 15:56:51.706976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.706988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.706996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707054 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.707062 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.707071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.707082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-08 15:56:51.707136 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.707149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.707157 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707174 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.707182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.707201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707218 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:56:51.707226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-08 15:56:51.707234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-08 15:56:51.707691 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.707700 | orchestrator | 2025-10-08 15:56:51.707708 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-10-08 15:56:51.707716 | orchestrator | Wednesday 08 October 2025 15:54:03 +0000 (0:00:01.924) 0:00:18.593 ***** 2025-10-08 15:56:51.707725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-08 15:56:51.707733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707779 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.707815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.707841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-08 15:56:51.707854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.707862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.707871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.707879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.707905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.707914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.707922 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.707994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.708008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708068 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:56:51.708078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.708321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.708330 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.708338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.708347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.708355 | orchestrator | 2025-10-08 15:56:51.708363 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-10-08 15:56:51.708371 | orchestrator | Wednesday 08 October 2025 15:54:09 +0000 (0:00:05.911) 0:00:24.504 ***** 2025-10-08 15:56:51.708379 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-10-08 15:56:51.708387 | orchestrator | 2025-10-08 15:56:51.708396 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-10-08 15:56:51.708466 | orchestrator | Wednesday 08 October 2025 15:54:10 +0000 (0:00:01.474) 0:00:25.979 ***** 2025-10-08 15:56:51.708478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708494 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708502 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708524 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708532 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708563 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708572 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708586 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708594 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708606 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708615 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093013, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8301928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.708623 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708652 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708667 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708675 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 
1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708684 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708696 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708712 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708741 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708756 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708764 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093043, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8337884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.708784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708792 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708800 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708809 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708843 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093029, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 
1759936278.8319044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708861 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708876 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708885 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093032, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.83262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708893 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708907 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093029, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8319044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708936 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708945 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093018, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8309422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708954 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708966 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092999, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8280137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708974 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093029, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8319044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093029, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8319044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.708995 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709024 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093018, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8309422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709033 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093014, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8305182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709041 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093018, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8309422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709053 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093011, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8299088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709061 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093018, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8309422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709070 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093005, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8288486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.709084 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093029, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8319044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709114 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093011, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8299088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709175 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093011, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8299088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-08 15:56:51.709185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
2025-10-08 15:56:51.709 | orchestrator | [identical per-item stat output condensed; every item carried the full stat dict: owner root:root, uid=0, gid=0, mode=0644, regular file on dev 132]
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2025-10-08 15:56:51.709 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-0] => all /operations/prometheus items (node.rules, prometheus.rec.rules, elasticsearch.rules, hardware.rules, alertmanager.rec.rules, redfish.rules, prometheus-extra.rules, ceph.rec.rules, alertmanager.rules, node.rec.rules, mysql.rules, rabbitmq.rules)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-1] => all /operations/prometheus items (same list)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-2] => all /operations/prometheus items (same list)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-3] => all /operations/prometheus items (same list)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-4] => all /operations/prometheus items (same list)
2025-10-08 15:56:51.709 | orchestrator | skipping: [testbed-node-5] => all /operations/prometheus items (same list)
'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.709965 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093067, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8375807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.709972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093039, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8332155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.709983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093002, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.828355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.709994 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092998, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.827758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.710001 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093023, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8316164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.710008 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093021, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.831255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.710037 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093064, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8371437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-08 15:56:51.710045 | orchestrator | 2025-10-08 15:56:51.710052 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-10-08 15:56:51.710058 | orchestrator | Wednesday 08 October 2025 15:54:38 +0000 (0:00:27.985) 0:00:53.964 ***** 2025-10-08 15:56:51.710065 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-08 15:56:51.710072 | orchestrator | 2025-10-08 15:56:51.710083 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-10-08 15:56:51.710090 | orchestrator | Wednesday 08 October 2025 15:54:39 +0000 (0:00:00.657) 0:00:54.622 ***** 2025-10-08 15:56:51.710097 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710104 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710111 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710144 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710150 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-08 15:56:51.710157 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710170 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 
15:56:51.710184 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710190 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710204 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710210 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710217 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710224 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710230 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710237 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710250 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710257 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710271 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710291 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710302 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710317 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710327 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710347 | orchestrator | 
node-4/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710358 | orchestrator | [WARNING]: Skipped 2025-10-08 15:56:51.710368 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710379 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-10-08 15:56:51.710390 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-08 15:56:51.710401 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-10-08 15:56:51.710411 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-08 15:56:51.710423 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 15:56:51.710432 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-08 15:56:51.710439 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-08 15:56:51.710446 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-08 15:56:51.710452 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-08 15:56:51.710459 | orchestrator | 2025-10-08 15:56:51.710466 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-10-08 15:56:51.710473 | orchestrator | Wednesday 08 October 2025 15:54:40 +0000 (0:00:01.783) 0:00:56.405 ***** 2025-10-08 15:56:51.710479 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710486 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.710493 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710508 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.710515 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710522 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.710528 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710535 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.710542 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710548 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:56:51.710555 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-08 15:56:51.710561 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.710568 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-10-08 15:56:51.710575 | orchestrator | 2025-10-08 15:56:51.710581 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-10-08 15:56:51.710588 | orchestrator | Wednesday 08 October 2025 15:54:57 +0000 (0:00:16.259) 0:01:12.664 ***** 2025-10-08 15:56:51.710595 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710606 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.710613 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710620 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.710627 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710633 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.710640 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710647 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.710653 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710660 | orchestrator | 
skipping: [testbed-node-4] 2025-10-08 15:56:51.710667 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-08 15:56:51.710673 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.710680 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-10-08 15:56:51.710687 | orchestrator | 2025-10-08 15:56:51.710693 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-10-08 15:56:51.710700 | orchestrator | Wednesday 08 October 2025 15:55:01 +0000 (0:00:03.965) 0:01:16.630 ***** 2025-10-08 15:56:51.710707 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710714 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710721 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710727 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.710734 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.710741 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.710747 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710754 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:56:51.710765 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710772 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.710779 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-08 15:56:51.710790 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.710797 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-10-08 15:56:51.710804 | orchestrator | 2025-10-08 15:56:51.710812 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-10-08 15:56:51.710823 | orchestrator | Wednesday 08 October 2025 15:55:03 +0000 (0:00:02.285) 0:01:18.916 ***** 2025-10-08 15:56:51.710834 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-08 15:56:51.710845 | orchestrator | 2025-10-08 15:56:51.710856 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-10-08 15:56:51.710867 | orchestrator | Wednesday 08 October 2025 15:55:04 +0000 (0:00:00.846) 0:01:19.762 ***** 2025-10-08 15:56:51.710878 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:56:51.710884 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:56:51.710903 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:56:51.710910 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:56:51.710916 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.710923 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:56:51.710937 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:56:51.710944 | orchestrator | 2025-10-08 15:56:51.710951 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-10-08 15:56:51.710958 | orchestrator | Wednesday 08 October 2025 15:55:05 +0000 (0:00:00.774) 0:01:20.536 ***** 2025-10-08 15:56:51.710964 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:56:51.710971 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:56:51.710977 | orchestrator | skipping: [testbed-node-4] 
2025-10-08 15:56:51.710984 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:56:51.710991 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.710997 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.711004 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.711011 | orchestrator |
2025-10-08 15:56:51.711017 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-10-08 15:56:51.711024 | orchestrator | Wednesday 08 October 2025 15:55:07 +0000 (0:00:02.912) 0:01:23.449 *****
2025-10-08 15:56:51.711031 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711037 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.711044 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711051 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711058 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:56:51.711064 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.711071 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711078 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.711089 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711096 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:56:51.711102 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711109 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:56:51.711116 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-08 15:56:51.711141 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:56:51.711148 | orchestrator |
2025-10-08 15:56:51.711155 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-10-08 15:56:51.711162 | orchestrator | Wednesday 08 October 2025 15:55:10 +0000 (0:00:02.062) 0:01:25.511 *****
2025-10-08 15:56:51.711168 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711181 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711188 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.711194 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.711201 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711207 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.711214 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711221 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:56:51.711227 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711234 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:56:51.711241 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711247 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:56:51.711254 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-10-08 15:56:51.711261 | orchestrator |
2025-10-08 15:56:51.711267 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-10-08 15:56:51.711274 | orchestrator | Wednesday 08 October 2025 15:55:11 +0000 (0:00:01.626) 0:01:27.137 *****
2025-10-08 15:56:51.711280 | orchestrator | [WARNING]: Skipped
2025-10-08 15:56:51.711291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-10-08 15:56:51.711298 | orchestrator | due to this access issue:
2025-10-08 15:56:51.711304 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-10-08 15:56:51.711311 | orchestrator | not a directory
2025-10-08 15:56:51.711318 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-08 15:56:51.711324 | orchestrator |
2025-10-08 15:56:51.711331 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-10-08 15:56:51.711338 | orchestrator | Wednesday 08 October 2025 15:55:12 +0000 (0:00:01.206) 0:01:28.343 *****
2025-10-08 15:56:51.711344 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:56:51.711351 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.711358 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.711364 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.711371 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:56:51.711377 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:56:51.711384 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:56:51.711391 | orchestrator |
2025-10-08 15:56:51.711397 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-10-08 15:56:51.711404 | orchestrator | Wednesday 08 October 2025 15:55:14 +0000 (0:00:01.217) 0:01:29.561 *****
2025-10-08 15:56:51.711411 | orchestrator | skipping: [testbed-manager]
2025-10-08 15:56:51.711422 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:56:51.711433 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:56:51.711444 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:56:51.711455 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:56:51.711465 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:56:51.711476 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:56:51.711488 | orchestrator |
2025-10-08 15:56:51.711499 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-10-08 15:56:51.711510 | orchestrator | Wednesday 08 October 2025 15:55:14 +0000 (0:00:00.814) 0:01:30.376 *****
2025-10-08 15:56:51.711522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711568 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-10-08 15:56:51.711575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-08 15:56:51.711638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711662 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-08 15:56:51.711700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-08 15:56:51.711714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.711721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.711728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.711739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-08 15:56:51.711751 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-08 15:56:51.711759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.711766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.711806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.711814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-08 15:56:51.711821 | orchestrator | 2025-10-08 15:56:51.711828 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-10-08 15:56:51.711835 | orchestrator | Wednesday 08 October 2025 15:55:19 +0000 (0:00:04.406) 0:01:34.782 ***** 2025-10-08 15:56:51.711846 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-08 15:56:51.711853 | orchestrator | skipping: [testbed-manager] 2025-10-08 15:56:51.711860 | orchestrator | 2025-10-08 15:56:51.711866 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-08 15:56:51.711873 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:01.437) 0:01:36.220 ***** 2025-10-08 15:56:51.711879 | orchestrator | 2025-10-08 15:56:51.711886 | orchestrator | TASK [prometheus : Flush handlers] 
*********************************************
2025-10-08 15:56:51.711892 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:00.087) 0:01:36.307 *****
2025-10-08 15:56:51.711899 | orchestrator |
2025-10-08 15:56:51.711906 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-10-08 15:56:51.711912 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:00.122) 0:01:36.430 *****
2025-10-08 15:56:51.711919 | orchestrator |
2025-10-08 15:56:51.711925 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-10-08 15:56:51.711932 | orchestrator | Wednesday 08 October 2025 15:55:21 +0000 (0:00:00.143) 0:01:36.573 *****
2025-10-08 15:56:51.711938 | orchestrator |
2025-10-08 15:56:51.711945 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-10-08 15:56:51.711952 | orchestrator | Wednesday 08 October 2025 15:55:21 +0000 (0:00:00.391) 0:01:36.965 *****
2025-10-08 15:56:51.711958 | orchestrator |
2025-10-08 15:56:51.711965 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-10-08 15:56:51.711972 | orchestrator | Wednesday 08 October 2025 15:55:21 +0000 (0:00:00.145) 0:01:37.111 *****
2025-10-08 15:56:51.711978 | orchestrator |
2025-10-08 15:56:51.711985 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-10-08 15:56:51.711992 | orchestrator | Wednesday 08 October 2025 15:55:21 +0000 (0:00:00.175) 0:01:37.286 *****
2025-10-08 15:56:51.711998 | orchestrator |
2025-10-08 15:56:51.712005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-10-08 15:56:51.712011 | orchestrator | Wednesday 08 October 2025 15:55:21 +0000 (0:00:00.174) 0:01:37.461 *****
2025-10-08 15:56:51.712018 | orchestrator | changed: [testbed-manager]
2025-10-08 15:56:51.712025 | orchestrator |
2025-10-08 15:56:51.712031 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-10-08 15:56:51.712041 | orchestrator | Wednesday 08 October 2025 15:55:34 +0000 (0:00:13.025) 0:01:50.487 *****
2025-10-08 15:56:51.712048 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:56:51.712055 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.712061 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:56:51.712068 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.712075 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:56:51.712081 | orchestrator | changed: [testbed-manager]
2025-10-08 15:56:51.712088 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.712094 | orchestrator |
2025-10-08 15:56:51.712101 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-10-08 15:56:51.712107 | orchestrator | Wednesday 08 October 2025 15:55:49 +0000 (0:00:14.160) 0:02:04.647 *****
2025-10-08 15:56:51.712114 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.712139 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.712146 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.712152 | orchestrator |
2025-10-08 15:56:51.712159 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-10-08 15:56:51.712166 | orchestrator | Wednesday 08 October 2025 15:55:54 +0000 (0:00:05.414) 0:02:10.062 *****
2025-10-08 15:56:51.712173 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.712180 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.712186 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.712193 | orchestrator |
2025-10-08 15:56:51.712200 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-10-08 15:56:51.712206 | orchestrator | Wednesday 08 October 2025 15:56:04 +0000 (0:00:10.023) 0:02:20.086 *****
2025-10-08 15:56:51.712218 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.712225 | orchestrator | changed: [testbed-manager]
2025-10-08 15:56:51.712231 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.712238 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:56:51.712245 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.712251 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:56:51.712258 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:56:51.712265 | orchestrator |
2025-10-08 15:56:51.712272 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-10-08 15:56:51.712278 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:09.000) 0:02:29.086 *****
2025-10-08 15:56:51.712285 | orchestrator | changed: [testbed-manager]
2025-10-08 15:56:51.712292 | orchestrator |
2025-10-08 15:56:51.712298 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-10-08 15:56:51.712305 | orchestrator | Wednesday 08 October 2025 15:56:22 +0000 (0:00:09.323) 0:02:38.410 *****
2025-10-08 15:56:51.712312 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:56:51.712318 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:56:51.712325 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:56:51.712332 | orchestrator |
2025-10-08 15:56:51.712338 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-10-08 15:56:51.712351 | orchestrator | Wednesday 08 October 2025 15:56:33 +0000 (0:00:10.811) 0:02:49.221 *****
2025-10-08 15:56:51.712358 | orchestrator | changed: [testbed-manager]
2025-10-08 15:56:51.712365 | orchestrator |
2025-10-08 15:56:51.712371 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-10-08 15:56:51.712378 | orchestrator | Wednesday 08 October 2025 15:56:38 +0000 (0:00:05.207) 0:02:54.429 *****
2025-10-08 15:56:51.712385 | orchestrator | changed: [testbed-node-4]
2025-10-08 15:56:51.712392 | orchestrator | changed: [testbed-node-5]
2025-10-08 15:56:51.712398 | orchestrator | changed: [testbed-node-3]
2025-10-08 15:56:51.712405 | orchestrator |
2025-10-08 15:56:51.712412 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 15:56:51.712419 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-10-08 15:56:51.712426 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-08 15:56:51.712433 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-08 15:56:51.712440 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-08 15:56:51.712447 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-08 15:56:51.712453 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-08 15:56:51.712460 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-08 15:56:51.712467 | orchestrator |
2025-10-08 15:56:51.712474 | orchestrator |
2025-10-08 15:56:51.712480 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 15:56:51.712487 | orchestrator | Wednesday 08 October 2025 15:56:49 +0000 (0:00:10.922) 0:03:05.351 *****
2025-10-08 15:56:51.712494 | orchestrator | ===============================================================================
2025-10-08 15:56:51.712500 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.99s
2025-10-08 15:56:51.712507 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.26s
2025-10-08 15:56:51.712518 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.16s
2025-10-08 15:56:51.712525 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.03s
2025-10-08 15:56:51.712532 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.92s
2025-10-08 15:56:51.712542 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.81s
2025-10-08 15:56:51.712549 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.02s
2025-10-08 15:56:51.712557 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.32s
2025-10-08 15:56:51.712568 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.00s
2025-10-08 15:56:51.712579 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.60s
2025-10-08 15:56:51.712591 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.91s
2025-10-08 15:56:51.712601 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.41s
2025-10-08 15:56:51.712613 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.21s
2025-10-08 15:56:51.712624 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.41s
2025-10-08 15:56:51.712634 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.97s
2025-10-08 15:56:51.712645 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.64s
2025-10-08 15:56:51.712657 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.91s
2025-10-08 15:56:51.712668 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.29s
2025-10-08 15:56:51.712676 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.06s
2025-10-08 15:56:51.712683 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.92s
2025-10-08 15:56:51.712689 | orchestrator | 2025-10-08 15:56:51 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:56:51.712696 | orchestrator | 2025-10-08 15:56:51 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:56:51.712703 | orchestrator | 2025-10-08 15:56:51 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:56:54.756352 | orchestrator | 2025-10-08 15:56:54 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:56:54.758102 | orchestrator | 2025-10-08 15:56:54 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:56:54.759995 | orchestrator | 2025-10-08 15:56:54 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:56:54.762067 | orchestrator | 2025-10-08 15:56:54 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:56:54.762488 | orchestrator | 2025-10-08 15:56:54 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:56:57.797199 | orchestrator | 2025-10-08 15:56:57 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:56:57.798290 | orchestrator | 2025-10-08 15:56:57 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:56:57.800302 | orchestrator | 2025-10-08 15:56:57 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:56:57.801463 | orchestrator | 2025-10-08 15:56:57 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:56:57.801502 | orchestrator | 2025-10-08 15:56:57 | INFO  | Wait 1 second(s) until the next check
2025-10-08
15:57:00.846707 | orchestrator | 2025-10-08 15:57:00 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:00.848467 | orchestrator | 2025-10-08 15:57:00 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:00.850791 | orchestrator | 2025-10-08 15:57:00 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:00.852591 | orchestrator | 2025-10-08 15:57:00 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:00.852908 | orchestrator | 2025-10-08 15:57:00 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:03.896914 | orchestrator | 2025-10-08 15:57:03 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:03.899741 | orchestrator | 2025-10-08 15:57:03 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:03.902174 | orchestrator | 2025-10-08 15:57:03 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:03.904949 | orchestrator | 2025-10-08 15:57:03 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:03.905342 | orchestrator | 2025-10-08 15:57:03 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:06.953253 | orchestrator | 2025-10-08 15:57:06 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:06.955426 | orchestrator | 2025-10-08 15:57:06 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:06.956953 | orchestrator | 2025-10-08 15:57:06 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:06.959158 | orchestrator | 2025-10-08 15:57:06 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:06.959185 | orchestrator | 2025-10-08 15:57:06 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:10.011849 | orchestrator | 2025-10-08 15:57:10 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:10.013925 | orchestrator | 2025-10-08 15:57:10 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:10.017032 | orchestrator | 2025-10-08 15:57:10 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:10.019531 | orchestrator | 2025-10-08 15:57:10 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:10.019556 | orchestrator | 2025-10-08 15:57:10 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:13.087306 | orchestrator | 2025-10-08 15:57:13 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:13.089375 | orchestrator | 2025-10-08 15:57:13 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:13.091234 | orchestrator | 2025-10-08 15:57:13 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:13.093044 | orchestrator | 2025-10-08 15:57:13 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:13.093069 | orchestrator | 2025-10-08 15:57:13 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:16.139808 | orchestrator | 2025-10-08 15:57:16 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:16.141424 | orchestrator | 2025-10-08 15:57:16 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:16.143643 | orchestrator | 2025-10-08 15:57:16 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:16.146303 | orchestrator | 2025-10-08 15:57:16 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:16.146358 | orchestrator | 2025-10-08 15:57:16 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:19.200043 | orchestrator | 2025-10-08 15:57:19 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:19.203338 | orchestrator | 2025-10-08 15:57:19 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:19.205593 | orchestrator | 2025-10-08 15:57:19 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:19.207464 | orchestrator | 2025-10-08 15:57:19 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:19.207778 | orchestrator | 2025-10-08 15:57:19 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:22.254319 | orchestrator | 2025-10-08 15:57:22 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:22.255210 | orchestrator | 2025-10-08 15:57:22 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:22.256797 | orchestrator | 2025-10-08 15:57:22 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:22.258471 | orchestrator | 2025-10-08 15:57:22 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:22.258501 | orchestrator | 2025-10-08 15:57:22 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:25.305418 | orchestrator | 2025-10-08 15:57:25 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:25.305525 | orchestrator | 2025-10-08 15:57:25 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:25.309958 | orchestrator | 2025-10-08 15:57:25 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:25.312344 | orchestrator | 2025-10-08 15:57:25 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:25.312648 | orchestrator | 2025-10-08 15:57:25 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:28.361632 | orchestrator | 2025-10-08 15:57:28 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:28.363669 | orchestrator | 2025-10-08 15:57:28 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:28.366492 | orchestrator | 2025-10-08 15:57:28 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:28.368872 | orchestrator | 2025-10-08 15:57:28 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:28.368896 | orchestrator | 2025-10-08 15:57:28 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:31.420275 | orchestrator | 2025-10-08 15:57:31 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:31.422952 | orchestrator | 2025-10-08 15:57:31 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:31.427242 | orchestrator | 2025-10-08 15:57:31 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:31.428301 | orchestrator | 2025-10-08 15:57:31 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:31.428323 | orchestrator | 2025-10-08 15:57:31 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:34.465877 | orchestrator | 2025-10-08 15:57:34 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:34.466380 | orchestrator | 2025-10-08 15:57:34 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:34.467451 | orchestrator | 2025-10-08 15:57:34 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:34.468827 | orchestrator | 2025-10-08 15:57:34 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:34.468876 | orchestrator | 2025-10-08 15:57:34 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:37.518368 | orchestrator | 2025-10-08 15:57:37 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:37.520364 | orchestrator | 2025-10-08 15:57:37 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:37.522802 | orchestrator | 2025-10-08 15:57:37 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:37.525222 | orchestrator | 2025-10-08 15:57:37 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:37.525599 | orchestrator | 2025-10-08 15:57:37 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:40.565852 | orchestrator | 2025-10-08 15:57:40 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:40.566239 | orchestrator | 2025-10-08 15:57:40 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:40.567380 | orchestrator | 2025-10-08 15:57:40 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:40.568427 | orchestrator | 2025-10-08 15:57:40 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:40.568450 | orchestrator | 2025-10-08 15:57:40 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:43.616300 | orchestrator | 2025-10-08 15:57:43 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:43.617106 | orchestrator | 2025-10-08 15:57:43 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:43.618376 | orchestrator | 2025-10-08 15:57:43 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:43.619228 | orchestrator | 2025-10-08 15:57:43 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:43.619252 | orchestrator | 2025-10-08 15:57:43 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:46.711423 | orchestrator | 2025-10-08 15:57:46 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:46.711754 | orchestrator | 2025-10-08 15:57:46 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:46.712487 | orchestrator | 2025-10-08 15:57:46 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:46.713387 | orchestrator | 2025-10-08 15:57:46 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:46.713411 | orchestrator | 2025-10-08 15:57:46 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:49.758480 | orchestrator | 2025-10-08 15:57:49 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:49.758856 | orchestrator | 2025-10-08 15:57:49 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:49.759758 | orchestrator | 2025-10-08 15:57:49 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:49.760557 | orchestrator | 2025-10-08 15:57:49 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:49.760579 | orchestrator | 2025-10-08 15:57:49 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:52.799601 | orchestrator | 2025-10-08 15:57:52 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:52.799739 | orchestrator | 2025-10-08 15:57:52 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:52.801834 | orchestrator | 2025-10-08 15:57:52 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:52.802547 | orchestrator | 2025-10-08 15:57:52 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:52.802572 | orchestrator | 2025-10-08 15:57:52 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:55.856490 | orchestrator | 2025-10-08 15:57:55 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:55.856615 | orchestrator | 2025-10-08 15:57:55 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:55.856629 | orchestrator | 2025-10-08 15:57:55 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:55.856642 | orchestrator | 2025-10-08 15:57:55 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:55.856669 | orchestrator | 2025-10-08 15:57:55 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:57:58.869030 | orchestrator | 2025-10-08 15:57:58 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:57:58.869208 | orchestrator | 2025-10-08 15:57:58 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:57:58.869608 | orchestrator | 2025-10-08 15:57:58 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:57:58.870270 | orchestrator | 2025-10-08 15:57:58 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:57:58.870410 | orchestrator | 2025-10-08 15:57:58 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:58:01.903194 | orchestrator | 2025-10-08 15:58:01 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:58:01.903314 | orchestrator | 2025-10-08 15:58:01 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:58:01.903674 | orchestrator | 2025-10-08 15:58:01 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:58:01.905874 | orchestrator | 2025-10-08 15:58:01 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:58:01.905897 | orchestrator | 2025-10-08 15:58:01 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:58:04.933504 | orchestrator | 2025-10-08 15:58:04 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state STARTED
2025-10-08 15:58:04.934693 | orchestrator | 2025-10-08 15:58:04 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:58:04.935520 | orchestrator | 2025-10-08 15:58:04 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:58:04.936307 | orchestrator | 2025-10-08 15:58:04 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:58:04.936864 | orchestrator | 2025-10-08 15:58:04 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:58:07.964734 | orchestrator | 2025-10-08 15:58:07 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 15:58:07.966994 | orchestrator | 2025-10-08 15:58:07 | INFO  | Task af272394-32d4-40a7-b33d-073343bea7a9 is in state SUCCESS
2025-10-08 15:58:07.967027 | orchestrator | 2025-10-08 15:58:07 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:58:07.967040 | orchestrator | 2025-10-08 15:58:07 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED
2025-10-08 15:58:07.968170 | orchestrator |
2025-10-08 15:58:07.968204 | orchestrator |
2025-10-08 15:58:07.968216 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 15:58:07.968253 | orchestrator |
2025-10-08 15:58:07.968265 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 15:58:07.968276 | orchestrator | Wednesday 08 October 2025 15:54:21 +0000 (0:00:00.581) 0:00:00.581 *****
2025-10-08 15:58:07.968287 | orchestrator | ok: [testbed-node-0]
2025-10-08 15:58:07.968300 | orchestrator | ok: [testbed-node-1]
2025-10-08 15:58:07.968310 | orchestrator | ok: [testbed-node-2]
2025-10-08 15:58:07.968321 | orchestrator | ok: [testbed-node-3]
2025-10-08 15:58:07.968332 | orchestrator | ok: [testbed-node-4]
2025-10-08 15:58:07.968343 | orchestrator | ok:
[testbed-node-5] 2025-10-08 15:58:07.968355 | orchestrator | 2025-10-08 15:58:07.968366 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 15:58:07.968377 | orchestrator | Wednesday 08 October 2025 15:54:22 +0000 (0:00:01.147) 0:00:01.729 ***** 2025-10-08 15:58:07.968388 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-10-08 15:58:07.968400 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-10-08 15:58:07.968411 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-10-08 15:58:07.968422 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-10-08 15:58:07.968432 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-10-08 15:58:07.968443 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-10-08 15:58:07.968454 | orchestrator | 2025-10-08 15:58:07.968465 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-10-08 15:58:07.968477 | orchestrator | 2025-10-08 15:58:07.968488 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-08 15:58:07.968499 | orchestrator | Wednesday 08 October 2025 15:54:23 +0000 (0:00:01.024) 0:00:02.753 ***** 2025-10-08 15:58:07.968510 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:58:07.968523 | orchestrator | 2025-10-08 15:58:07.968534 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-10-08 15:58:07.968545 | orchestrator | Wednesday 08 October 2025 15:54:25 +0000 (0:00:02.166) 0:00:04.920 ***** 2025-10-08 15:58:07.968557 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-10-08 15:58:07.968568 | orchestrator | 2025-10-08 15:58:07.968579 | orchestrator | TASK 
[service-ks-register : cinder | Creating endpoints] *********************** 2025-10-08 15:58:07.969114 | orchestrator | Wednesday 08 October 2025 15:54:28 +0000 (0:00:03.317) 0:00:08.238 ***** 2025-10-08 15:58:07.969429 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-10-08 15:58:07.969447 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-10-08 15:58:07.969459 | orchestrator | 2025-10-08 15:58:07.969470 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-10-08 15:58:07.969482 | orchestrator | Wednesday 08 October 2025 15:54:34 +0000 (0:00:05.963) 0:00:14.201 ***** 2025-10-08 15:58:07.969493 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-08 15:58:07.969504 | orchestrator | 2025-10-08 15:58:07.969516 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-10-08 15:58:07.969527 | orchestrator | Wednesday 08 October 2025 15:54:38 +0000 (0:00:03.135) 0:00:17.336 ***** 2025-10-08 15:58:07.969550 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-08 15:58:07.969561 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-10-08 15:58:07.969572 | orchestrator | 2025-10-08 15:58:07.969583 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-10-08 15:58:07.969594 | orchestrator | Wednesday 08 October 2025 15:54:42 +0000 (0:00:03.974) 0:00:21.311 ***** 2025-10-08 15:58:07.969604 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 15:58:07.969616 | orchestrator | 2025-10-08 15:58:07.969626 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-10-08 15:58:07.969650 | orchestrator | Wednesday 08 October 2025 15:54:45 +0000 (0:00:03.703) 
0:00:25.015 ***** 2025-10-08 15:58:07.969661 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-10-08 15:58:07.969672 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-10-08 15:58:07.969683 | orchestrator | 2025-10-08 15:58:07.969693 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-10-08 15:58:07.969704 | orchestrator | Wednesday 08 October 2025 15:54:53 +0000 (0:00:07.657) 0:00:32.672 ***** 2025-10-08 15:58:07.969719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.969778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.969793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.969811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.969973 | orchestrator | 2025-10-08 15:58:07.970013 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-08 15:58:07.970077 | orchestrator | Wednesday 08 October 2025 15:54:55 +0000 (0:00:02.011) 0:00:34.684 ***** 2025-10-08 15:58:07.970091 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.970104 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.970117 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.970149 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:58:07.970163 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:58:07.970175 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:58:07.970188 | orchestrator | 2025-10-08 15:58:07.970200 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-08 15:58:07.970213 | orchestrator | Wednesday 08 October 2025 15:54:56 +0000 (0:00:00.716) 0:00:35.401 ***** 2025-10-08 15:58:07.970226 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.970239 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.970251 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.970264 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 15:58:07.970277 | orchestrator | 2025-10-08 15:58:07.970289 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-10-08 15:58:07.970301 | orchestrator | Wednesday 08 October 2025 15:54:57 +0000 (0:00:01.077) 0:00:36.479 ***** 2025-10-08 15:58:07.970314 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-10-08 15:58:07.970327 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-10-08 15:58:07.970340 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-10-08 15:58:07.970353 
| orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-10-08 15:58:07.970365 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-10-08 15:58:07.970377 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-10-08 15:58:07.970390 | orchestrator | 2025-10-08 15:58:07.970402 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-10-08 15:58:07.970414 | orchestrator | Wednesday 08 October 2025 15:54:59 +0000 (0:00:02.545) 0:00:39.024 ***** 2025-10-08 15:58:07.970442 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970456 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970469 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970516 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970531 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970550 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-08 15:58:07.970575 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-08 15:58:07.970588 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-08 15:58:07.970630 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-08 15:58:07.970644 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-08 15:58:07.970664 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-08 15:58:07.970682 | orchestrator | 
changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-10-08 15:58:07.970694 | orchestrator |
2025-10-08 15:58:07.970706 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-10-08 15:58:07.970717 | orchestrator | Wednesday 08 October 2025 15:55:03 +0000 (0:00:03.898) 0:00:42.923 *****
2025-10-08 15:58:07.970729 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:58:07.970741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:58:07.970752 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-10-08 15:58:07.970763 | orchestrator |
2025-10-08 15:58:07.970775 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-10-08 15:58:07.970786 | orchestrator | Wednesday 08 October 2025 15:55:06 +0000 (0:00:02.597) 0:00:45.520 *****
2025-10-08 15:58:07.970797 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-10-08 15:58:07.970808 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-10-08 15:58:07.970819 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-10-08 15:58:07.970830 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-10-08 15:58:07.970842 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-10-08 15:58:07.970882 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-10-08 15:58:07.970896 | orchestrator |
2025-10-08 15:58:07.970907 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-10-08 15:58:07.970918 | orchestrator | Wednesday 08 October 2025 15:55:09 +0000 (0:00:03.404) 0:00:48.924 *****
2025-10-08 15:58:07.970929 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-10-08 15:58:07.970941 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-10-08 15:58:07.970952 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-10-08 15:58:07.970963 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-10-08 15:58:07.970974 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-10-08 15:58:07.970993 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-10-08 15:58:07.971004 | orchestrator |
2025-10-08 15:58:07.971015 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-10-08 15:58:07.971027 | orchestrator | Wednesday 08 October 2025 15:55:10 +0000 (0:00:01.220) 0:00:50.145 *****
2025-10-08 15:58:07.971038 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:58:07.971049 | orchestrator |
2025-10-08 15:58:07.971060 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-10-08 15:58:07.971071 | orchestrator | Wednesday 08 October 2025 15:55:11 +0000 (0:00:00.175) 0:00:50.320 *****
2025-10-08 15:58:07.971082 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:58:07.971094 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:58:07.971105 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:58:07.971116 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:58:07.971179 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:58:07.971192 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:58:07.971204 | orchestrator |
2025-10-08 15:58:07.971215 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-10-08 15:58:07.971227 | orchestrator | Wednesday 08 October 2025 15:55:11 +0000 (0:00:00.786) 0:00:51.106 *****
2025-10-08 15:58:07.971239 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 15:58:07.971252 | orchestrator |
2025-10-08 15:58:07.971264 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-10-08 15:58:07.971275 | orchestrator | Wednesday 08 October 2025 15:55:13 +0000 (0:00:01.278) 0:00:52.385 *****
2025-10-08 15:58:07.971293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971537 | orchestrator |
2025-10-08 15:58:07.971548 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-10-08 15:58:07.971560 | orchestrator | Wednesday 08 October 2025 15:55:16 +0000 (0:00:03.356) 0:00:55.742 *****
2025-10-08 15:58:07.971571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971604 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:58:07.971616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971637 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:58:07.971652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971674 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:58:07.971685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971721 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:58:07.971732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971753 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:58:07.971768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971797 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:58:07.971807 | orchestrator |
2025-10-08 15:58:07.971817 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-10-08 15:58:07.971828 | orchestrator | Wednesday 08 October 2025 15:55:18 +0000 (0:00:02.053) 0:00:57.795 *****
2025-10-08 15:58:07.971845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971867 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:58:07.971878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971913 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:58:07.971923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.971941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971952 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:58:07.971963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.971985 | orchestrator | skipping: [testbed-node-3]
2025-10-08 15:58:07.972000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972027 | orchestrator | skipping: [testbed-node-5]
2025-10-08 15:58:07.972044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972066 | orchestrator | skipping: [testbed-node-4]
2025-10-08 15:58:07.972076 | orchestrator |
2025-10-08 15:58:07.972086 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-10-08 15:58:07.972097 | orchestrator | Wednesday 08 October 2025 15:55:20 +0000 (0:00:02.062) 0:00:59.858 *****
2025-10-08 15:58:07.972108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.972123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.972159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-08 15:58:07.972177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-10-08 15:58:07.972199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value':
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-10-08 15:58:07.972242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972297 | orchestrator | 2025-10-08 15:58:07.972308 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-10-08 15:58:07.972318 | orchestrator | Wednesday 08 October 2025 15:55:24 +0000 (0:00:03.439) 0:01:03.297 ***** 2025-10-08 15:58:07.972328 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-08 15:58:07.972339 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:58:07.972353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-08 15:58:07.972364 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-08 15:58:07.972374 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:58:07.972384 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-08 15:58:07.972394 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:58:07.972405 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-08 15:58:07.972415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-08 15:58:07.972425 | orchestrator | 2025-10-08 15:58:07.972435 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-10-08 15:58:07.972445 | orchestrator | Wednesday 08 October 2025 15:55:26 +0000 (0:00:02.631) 0:01:05.928 ***** 2025-10-08 15:58:07.972456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.972473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.972484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.972505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.972619 | orchestrator | 
2025-10-08 15:58:07.972630 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-10-08 15:58:07.972640 | orchestrator | Wednesday 08 October 2025 15:55:38 +0000 (0:00:11.361) 0:01:17.290 ***** 2025-10-08 15:58:07.972655 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.972666 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.972676 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.972686 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:58:07.972696 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:58:07.972706 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:58:07.972716 | orchestrator | 2025-10-08 15:58:07.972727 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-10-08 15:58:07.972737 | orchestrator | Wednesday 08 October 2025 15:55:41 +0000 (0:00:03.142) 0:01:20.432 ***** 2025-10-08 15:58:07.972748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-08 15:58:07.972765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972775 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.972791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-08 15:58:07.972802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972813 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.972830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-08 15:58:07.972841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972864 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.972875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972901 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:58:07.972912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972934 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:58:07.972950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-08 15:58:07.972980 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:58:07.972990 | orchestrator | 2025-10-08 15:58:07.973000 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-10-08 15:58:07.973010 | orchestrator | Wednesday 08 October 2025 15:55:42 +0000 (0:00:01.585) 0:01:22.018 ***** 2025-10-08 15:58:07.973021 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.973030 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.973040 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.973050 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:58:07.973060 | orchestrator | skipping: [testbed-node-4] 2025-10-08 15:58:07.973070 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:58:07.973080 | orchestrator | 2025-10-08 15:58:07.973090 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-10-08 15:58:07.973101 | orchestrator | Wednesday 08 October 2025 15:55:43 +0000 (0:00:00.535) 0:01:22.553 ***** 2025-10-08 15:58:07.973116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.973172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.973192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-08 15:58:07.973211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973267 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973307 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-08 15:58:07.973333 | orchestrator | 2025-10-08 15:58:07.973344 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-08 15:58:07.973354 | orchestrator | Wednesday 08 October 2025 15:55:46 +0000 (0:00:02.729) 0:01:25.283 ***** 2025-10-08 15:58:07.973362 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.973371 | orchestrator | skipping: [testbed-node-1] 2025-10-08 15:58:07.973379 | orchestrator | skipping: [testbed-node-2] 2025-10-08 15:58:07.973387 | orchestrator | skipping: [testbed-node-3] 2025-10-08 15:58:07.973395 | 
orchestrator | skipping: [testbed-node-4] 2025-10-08 15:58:07.973403 | orchestrator | skipping: [testbed-node-5] 2025-10-08 15:58:07.973411 | orchestrator | 2025-10-08 15:58:07.973419 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-10-08 15:58:07.973434 | orchestrator | Wednesday 08 October 2025 15:55:46 +0000 (0:00:00.585) 0:01:25.868 ***** 2025-10-08 15:58:07.973442 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:58:07.973450 | orchestrator | 2025-10-08 15:58:07.973458 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-10-08 15:58:07.973467 | orchestrator | Wednesday 08 October 2025 15:55:49 +0000 (0:00:02.646) 0:01:28.514 ***** 2025-10-08 15:58:07.973475 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:58:07.973483 | orchestrator | 2025-10-08 15:58:07.973491 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-10-08 15:58:07.973500 | orchestrator | Wednesday 08 October 2025 15:55:51 +0000 (0:00:02.248) 0:01:30.763 ***** 2025-10-08 15:58:07.973508 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:58:07.973516 | orchestrator | 2025-10-08 15:58:07.973524 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973532 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:21.883) 0:01:52.647 ***** 2025-10-08 15:58:07.973540 | orchestrator | 2025-10-08 15:58:07.973552 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973561 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.068) 0:01:52.715 ***** 2025-10-08 15:58:07.973569 | orchestrator | 2025-10-08 15:58:07.973578 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973586 | orchestrator | Wednesday 08 October 2025 
15:56:13 +0000 (0:00:00.070) 0:01:52.785 ***** 2025-10-08 15:58:07.973594 | orchestrator | 2025-10-08 15:58:07.973602 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973615 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.065) 0:01:52.850 ***** 2025-10-08 15:58:07.973623 | orchestrator | 2025-10-08 15:58:07.973631 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973640 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.079) 0:01:52.930 ***** 2025-10-08 15:58:07.973648 | orchestrator | 2025-10-08 15:58:07.973656 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-08 15:58:07.973664 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.079) 0:01:53.009 ***** 2025-10-08 15:58:07.973672 | orchestrator | 2025-10-08 15:58:07.973680 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-10-08 15:58:07.973688 | orchestrator | Wednesday 08 October 2025 15:56:13 +0000 (0:00:00.072) 0:01:53.082 ***** 2025-10-08 15:58:07.973696 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:58:07.973704 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:58:07.973713 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:58:07.973721 | orchestrator | 2025-10-08 15:58:07.973729 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-10-08 15:58:07.973737 | orchestrator | Wednesday 08 October 2025 15:56:38 +0000 (0:00:25.110) 0:02:18.193 ***** 2025-10-08 15:58:07.973745 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:58:07.973753 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:58:07.973761 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:58:07.973769 | orchestrator | 2025-10-08 15:58:07.973777 | orchestrator | RUNNING 
HANDLER [cinder : Restart cinder-volume container] ********************* 2025-10-08 15:58:07.973786 | orchestrator | Wednesday 08 October 2025 15:56:46 +0000 (0:00:07.353) 0:02:25.546 ***** 2025-10-08 15:58:07.973794 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:58:07.973802 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:58:07.973810 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:58:07.973818 | orchestrator | 2025-10-08 15:58:07.973826 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-10-08 15:58:07.973834 | orchestrator | Wednesday 08 October 2025 15:57:54 +0000 (0:01:07.909) 0:03:33.456 ***** 2025-10-08 15:58:07.973842 | orchestrator | changed: [testbed-node-3] 2025-10-08 15:58:07.973850 | orchestrator | changed: [testbed-node-5] 2025-10-08 15:58:07.973864 | orchestrator | changed: [testbed-node-4] 2025-10-08 15:58:07.973873 | orchestrator | 2025-10-08 15:58:07.973881 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-10-08 15:58:07.973889 | orchestrator | Wednesday 08 October 2025 15:58:03 +0000 (0:00:09.505) 0:03:42.961 ***** 2025-10-08 15:58:07.973897 | orchestrator | skipping: [testbed-node-0] 2025-10-08 15:58:07.973905 | orchestrator | 2025-10-08 15:58:07.973913 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:58:07.973921 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-08 15:58:07.973934 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-08 15:58:07.973942 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-08 15:58:07.973951 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-08 15:58:07.973959 | 
orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-08 15:58:07.973967 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-08 15:58:07.973975 | orchestrator | 2025-10-08 15:58:07.973983 | orchestrator | 2025-10-08 15:58:07.973992 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:58:07.974000 | orchestrator | Wednesday 08 October 2025 15:58:05 +0000 (0:00:01.732) 0:03:44.694 ***** 2025-10-08 15:58:07.974008 | orchestrator | =============================================================================== 2025-10-08 15:58:07.974038 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 67.91s 2025-10-08 15:58:07.974048 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.11s 2025-10-08 15:58:07.974056 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.88s 2025-10-08 15:58:07.974064 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.37s 2025-10-08 15:58:07.974072 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.51s 2025-10-08 15:58:07.974079 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.66s 2025-10-08 15:58:07.974087 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.35s 2025-10-08 15:58:07.974095 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.96s 2025-10-08 15:58:07.974108 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.97s 2025-10-08 15:58:07.974116 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.90s 2025-10-08 15:58:07.974124 | orchestrator | service-ks-register : 
cinder | Creating roles --------------------------- 3.70s 2025-10-08 15:58:07.974148 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.44s 2025-10-08 15:58:07.974156 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.40s 2025-10-08 15:58:07.974164 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.36s 2025-10-08 15:58:07.974172 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.32s 2025-10-08 15:58:07.974180 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.14s 2025-10-08 15:58:07.974187 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.13s 2025-10-08 15:58:07.974195 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.73s 2025-10-08 15:58:07.974203 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.65s 2025-10-08 15:58:07.974219 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.63s 2025-10-08 15:58:07.974227 | orchestrator | 2025-10-08 15:58:07 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED 2025-10-08 15:58:07.974235 | orchestrator | 2025-10-08 15:58:07 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:58:11.001518 | orchestrator | 2025-10-08 15:58:10 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED 2025-10-08 15:58:11.001759 | orchestrator | 2025-10-08 15:58:11 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 15:58:11.005863 | orchestrator | 2025-10-08 15:58:11 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state STARTED 2025-10-08 15:58:11.007016 | orchestrator | 2025-10-08 15:58:11 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED 2025-10-08 15:58:11.007041 | 
orchestrator | 2025-10-08 15:58:11 | INFO  | Wait 1 second(s) until the next check 2025-10-08 15:59:02.689617 | orchestrator | 2025-10-08 15:59:02 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED 2025-10-08 15:59:02.689750 | orchestrator | 2025-10-08 15:59:02 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 15:59:02.690285 | orchestrator | 2025-10-08 15:59:02 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 15:59:02.691961 | orchestrator | 2025-10-08 15:59:02 | INFO  | Task 4988d366-0862-4257-b730-a1af7a0acc5a is in state SUCCESS 2025-10-08 15:59:02.693872 | orchestrator | 2025-10-08 15:59:02.693907 | orchestrator | 2025-10-08 15:59:02.693920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 15:59:02.693933 | orchestrator | 2025-10-08 15:59:02.693945 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 15:59:02.693957 | orchestrator | Wednesday 08 October 2025 15:56:56 +0000 (0:00:00.258) 0:00:00.258 ***** 2025-10-08 15:59:02.693968 | orchestrator | ok: [testbed-node-0] 2025-10-08 15:59:02.693981 | orchestrator | ok: [testbed-node-1] 2025-10-08 15:59:02.693992 | orchestrator | ok: [testbed-node-2] 2025-10-08 15:59:02.694003 | orchestrator | 2025-10-08 15:59:02.694014 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-10-08 15:59:02.694079 | orchestrator | Wednesday 08 October 2025 15:56:56 +0000 (0:00:00.267) 0:00:00.526 ***** 2025-10-08 15:59:02.694091 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-10-08 15:59:02.694103 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-10-08 15:59:02.694159 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-10-08 15:59:02.694172 | orchestrator | 2025-10-08 15:59:02.694182 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-10-08 15:59:02.694194 | orchestrator | 2025-10-08 15:59:02.694204 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-08 15:59:02.694216 | orchestrator | Wednesday 08 October 2025 15:56:57 +0000 (0:00:00.313) 0:00:00.839 ***** 2025-10-08 15:59:02.694227 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 15:59:02.694241 | orchestrator | 2025-10-08 15:59:02.694253 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-10-08 15:59:02.694264 | orchestrator | Wednesday 08 October 2025 15:56:57 +0000 (0:00:00.482) 0:00:01.322 ***** 2025-10-08 15:59:02.694276 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-10-08 15:59:02.694287 | orchestrator | 2025-10-08 15:59:02.694298 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-10-08 15:59:02.694309 | orchestrator | Wednesday 08 October 2025 15:57:01 +0000 (0:00:03.557) 0:00:04.880 ***** 2025-10-08 15:59:02.694320 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-10-08 15:59:02.694332 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 
2025-10-08 15:59:02.694343 | orchestrator |
2025-10-08 15:59:02.694354 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-10-08 15:59:02.694364 | orchestrator | Wednesday 08 October 2025 15:57:07 +0000 (0:00:06.553) 0:00:11.433 *****
2025-10-08 15:59:02.694375 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-08 15:59:02.694387 | orchestrator |
2025-10-08 15:59:02.694398 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-10-08 15:59:02.694408 | orchestrator | Wednesday 08 October 2025 15:57:10 +0000 (0:00:03.272) 0:00:14.706 *****
2025-10-08 15:59:02.694446 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-08 15:59:02.694458 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-10-08 15:59:02.694471 | orchestrator |
2025-10-08 15:59:02.694483 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-10-08 15:59:02.694495 | orchestrator | Wednesday 08 October 2025 15:57:15 +0000 (0:00:04.058) 0:00:18.764 *****
2025-10-08 15:59:02.694508 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-08 15:59:02.694521 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-10-08 15:59:02.694533 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-10-08 15:59:02.694546 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-10-08 15:59:02.694558 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-10-08 15:59:02.694570 | orchestrator |
2025-10-08 15:59:02.694582 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-10-08 15:59:02.694594 | orchestrator | Wednesday 08 October 2025 15:57:31 +0000 (0:00:16.681) 0:00:35.446 *****
2025-10-08 15:59:02.694607 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-10-08 15:59:02.694619 | orchestrator |
2025-10-08 15:59:02.694631 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-10-08 15:59:02.694643 | orchestrator | Wednesday 08 October 2025 15:57:36 +0000 (0:00:04.354) 0:00:39.800 *****
2025-10-08 15:59:02.694678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.694711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.694725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.694748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.694853 | orchestrator |
2025-10-08 15:59:02.694864 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-10-08 15:59:02.694876 | orchestrator | Wednesday 08 October 2025 15:57:38 +0000 (0:00:02.123) 0:00:41.924 *****
2025-10-08 15:59:02.694887 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-10-08 15:59:02.694898 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-10-08 15:59:02.694908 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-10-08 15:59:02.694919 | orchestrator |
2025-10-08 15:59:02.694930 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-10-08 15:59:02.694941 | orchestrator | Wednesday 08 October 2025 15:57:39 +0000 (0:00:01.386) 0:00:43.311 *****
2025-10-08 15:59:02.694952 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.694964 | orchestrator |
2025-10-08 15:59:02.694974 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-10-08 15:59:02.694985 | orchestrator | Wednesday 08 October 2025 15:57:39 +0000 (0:00:00.131) 0:00:43.443 *****
2025-10-08 15:59:02.694996 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.695007 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.695018 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.695029 | orchestrator |
2025-10-08 15:59:02.695040 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-10-08 15:59:02.695051 | orchestrator | Wednesday 08 October 2025 15:57:40 +0000 (0:00:00.503) 0:00:43.947 *****
2025-10-08 15:59:02.695062 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 15:59:02.695072 | orchestrator |
2025-10-08 15:59:02.695083 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-10-08 15:59:02.695094 | orchestrator | Wednesday 08 October 2025 15:57:40 +0000 (0:00:00.532) 0:00:44.479 *****
2025-10-08 15:59:02.695112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695284 | orchestrator |
2025-10-08 15:59:02.695295 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-10-08 15:59:02.695306 | orchestrator | Wednesday 08 October 2025 15:57:44 +0000 (0:00:03.466) 0:00:47.946 *****
2025-10-08 15:59:02.695318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695353 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.695376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695419 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.695431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695471 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.695482 | orchestrator |
2025-10-08 15:59:02.695493 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-10-08 15:59:02.695504 | orchestrator | Wednesday 08 October 2025 15:57:45 +0000 (0:00:01.491) 0:00:49.438 *****
2025-10-08 15:59:02.695524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695566 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.695578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695631 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.695651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695685 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.695696 | orchestrator |
2025-10-08 15:59:02.695707 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-10-08 15:59:02.695718 | orchestrator | Wednesday 08 October 2025 15:57:46 +0000 (0:00:00.884) 0:00:50.322 *****
2025-10-08 15:59:02.695730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.695784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.695876 | orchestrator |
2025-10-08 15:59:02.695887 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-10-08 15:59:02.695898 | orchestrator | Wednesday 08 October 2025 15:57:50 +0000 (0:00:03.665) 0:00:53.988 *****
2025-10-08 15:59:02.695910 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:59:02.695921 | orchestrator | changed: [testbed-node-1]
2025-10-08 15:59:02.695931 | orchestrator | changed: [testbed-node-2]
2025-10-08 15:59:02.695942 | orchestrator |
2025-10-08 15:59:02.695953 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-10-08 15:59:02.695964 | orchestrator | Wednesday 08 October 2025 15:57:52 +0000 (0:00:02.169) 0:00:56.158 *****
2025-10-08 15:59:02.695975 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 15:59:02.695986 | orchestrator |
2025-10-08 15:59:02.695997 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-10-08 15:59:02.696008 | orchestrator | Wednesday 08 October 2025 15:57:53 +0000 (0:00:00.881) 0:00:57.039 *****
2025-10-08 15:59:02.696019 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.696030 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.696041 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.696052 | orchestrator |
2025-10-08 15:59:02.696063 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-10-08 15:59:02.696074 | orchestrator | Wednesday 08 October 2025 15:57:53 +0000 (0:00:00.497) 0:00:57.536 *****
2025-10-08 15:59:02.696085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:59:02.696102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:59:02.696127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-08 15:59:02.696190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-08 15:59:02.696203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-08 15:59:02.696215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-08 15:59:02.696227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-08 15:59:02.696252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-08 15:59:02.696264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696275 | orchestrator |
2025-10-08 15:59:02.696286 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-10-08 15:59:02.696297 | orchestrator | Wednesday 08 October 2025 15:58:04 +0000 (0:00:10.306) 0:01:07.843 *****
2025-10-08 15:59:02.696317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696352 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.696371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696418 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.696429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696467 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.696477 | orchestrator |
2025-10-08 15:59:02.696487 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-10-08 15:59:02.696497 | orchestrator | Wednesday 08 October 2025 15:58:05 +0000 (0:00:01.410) 0:01:09.253 *****
2025-10-08 15:59:02.696512 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-08 15:59:02.696551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-08 15:59:02.696633 | orchestrator |
2025-10-08 15:59:02.696643 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-10-08 15:59:02.696653 | orchestrator | Wednesday 08 October 2025 15:58:09 +0000 (0:00:04.138) 0:01:13.392 *****
2025-10-08 15:59:02.696663 | orchestrator | skipping: [testbed-node-0]
2025-10-08 15:59:02.696673 | orchestrator | skipping: [testbed-node-1]
2025-10-08 15:59:02.696683 | orchestrator | skipping: [testbed-node-2]
2025-10-08 15:59:02.696693 | orchestrator |
2025-10-08 15:59:02.696702 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-10-08 15:59:02.696712 | orchestrator | Wednesday 08 October 2025 15:58:10 +0000 (0:00:00.525) 0:01:13.918 *****
2025-10-08 15:59:02.696731 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:59:02.696741 | orchestrator |
2025-10-08 15:59:02.696751 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-10-08 15:59:02.696761 | orchestrator | Wednesday 08 October 2025 15:58:12 +0000 (0:00:02.503) 0:01:16.422 *****
2025-10-08 15:59:02.696770 | orchestrator | changed: [testbed-node-0]
2025-10-08 15:59:02.696780 |
orchestrator | 2025-10-08 15:59:02.696790 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-10-08 15:59:02.696799 | orchestrator | Wednesday 08 October 2025 15:58:15 +0000 (0:00:02.417) 0:01:18.839 ***** 2025-10-08 15:59:02.696809 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:59:02.696819 | orchestrator | 2025-10-08 15:59:02.696828 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-08 15:59:02.696838 | orchestrator | Wednesday 08 October 2025 15:58:28 +0000 (0:00:13.663) 0:01:32.502 ***** 2025-10-08 15:59:02.696848 | orchestrator | 2025-10-08 15:59:02.696858 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-08 15:59:02.696867 | orchestrator | Wednesday 08 October 2025 15:58:28 +0000 (0:00:00.179) 0:01:32.682 ***** 2025-10-08 15:59:02.696877 | orchestrator | 2025-10-08 15:59:02.696887 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-08 15:59:02.696896 | orchestrator | Wednesday 08 October 2025 15:58:29 +0000 (0:00:00.063) 0:01:32.746 ***** 2025-10-08 15:59:02.696906 | orchestrator | 2025-10-08 15:59:02.696915 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-10-08 15:59:02.696925 | orchestrator | Wednesday 08 October 2025 15:58:29 +0000 (0:00:00.069) 0:01:32.815 ***** 2025-10-08 15:59:02.696935 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:59:02.696945 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:59:02.696954 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:59:02.696964 | orchestrator | 2025-10-08 15:59:02.696973 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-10-08 15:59:02.696983 | orchestrator | Wednesday 08 October 2025 15:58:36 +0000 (0:00:07.701) 0:01:40.517 ***** 2025-10-08 15:59:02.696993 | 
orchestrator | changed: [testbed-node-0] 2025-10-08 15:59:02.697002 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:59:02.697012 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:59:02.697021 | orchestrator | 2025-10-08 15:59:02.697031 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-10-08 15:59:02.697041 | orchestrator | Wednesday 08 October 2025 15:58:48 +0000 (0:00:11.306) 0:01:51.824 ***** 2025-10-08 15:59:02.697050 | orchestrator | changed: [testbed-node-0] 2025-10-08 15:59:02.697060 | orchestrator | changed: [testbed-node-2] 2025-10-08 15:59:02.697069 | orchestrator | changed: [testbed-node-1] 2025-10-08 15:59:02.697079 | orchestrator | 2025-10-08 15:59:02.697088 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 15:59:02.697104 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-08 15:59:02.697116 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:59:02.697126 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 15:59:02.697151 | orchestrator | 2025-10-08 15:59:02.697161 | orchestrator | 2025-10-08 15:59:02.697171 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 15:59:02.697181 | orchestrator | Wednesday 08 October 2025 15:59:00 +0000 (0:00:11.963) 0:02:03.788 ***** 2025-10-08 15:59:02.697190 | orchestrator | =============================================================================== 2025-10-08 15:59:02.697200 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.68s 2025-10-08 15:59:02.697215 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.66s 2025-10-08 15:59:02.697233 | orchestrator | 
barbican : Restart barbican-worker container --------------------------- 11.96s 2025-10-08 15:59:02.697243 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.31s 2025-10-08 15:59:02.697252 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.31s 2025-10-08 15:59:02.697262 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.70s 2025-10-08 15:59:02.697272 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.55s 2025-10-08 15:59:02.697281 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.35s 2025-10-08 15:59:02.697291 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.15s 2025-10-08 15:59:02.697301 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s 2025-10-08 15:59:02.697310 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.67s 2025-10-08 15:59:02.697320 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s 2025-10-08 15:59:02.697330 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.47s 2025-10-08 15:59:02.697340 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.27s 2025-10-08 15:59:02.697349 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.50s 2025-10-08 15:59:02.697359 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.42s 2025-10-08 15:59:02.697369 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.17s 2025-10-08 15:59:02.697378 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.12s 2025-10-08 15:59:02.697388 | orchestrator | 
service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.49s
barbican : Copying over existing policy file ---------------------------- 1.41s
2025-10-08 15:59:02 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:59:02 | INFO  | Wait 1 second(s) until the next check
2025-10-08 15:59:05 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 15:59:05 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 15:59:05 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 15:59:05 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 15:59:05 | INFO  | Wait 1 second(s) until the next check
[... the same four-task polling cycle repeats every ~3 seconds from 15:59:08 through 16:00:58; all four tasks remain in state STARTED ...]
2025-10-08 16:01:01 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:01 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:01 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:01 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state STARTED
2025-10-08 16:01:01 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:01:04 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:04 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:04 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:04 | INFO  | Task 11ec0c4c-b55f-49e3-9b7e-291373ba2e0e is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Wednesday 08 October 2025 15:56:55 +0000 (0:00:00.282) 0:00:00.282 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Wednesday 08 October 2025 15:56:56 +0000 (0:00:01.481) 0:00:01.764 *****
ok: [testbed-node-0] => (item=enable_neutron_True)
ok: [testbed-node-1] => (item=enable_neutron_True)
ok: [testbed-node-2] => (item=enable_neutron_True)
ok: [testbed-node-3] => (item=enable_neutron_True)
ok: [testbed-node-4] => (item=enable_neutron_True)
ok: [testbed-node-5] => (item=enable_neutron_True)

PLAY [Apply role neutron] ******************************************************

TASK [neutron : include_tasks] *************************************************
Wednesday 08 October 2025 15:56:57 +0000 (0:00:00.597) 0:00:02.361 *****
included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [neutron : Get container facts] *******************************************
Wednesday 08 October 2025 15:56:58 +0000 (0:00:01.113) 0:00:03.475 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [neutron : Get container volume facts] ************************************
Wednesday 08 October 2025 15:56:59 +0000 (0:00:01.219) 0:00:04.694 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [neutron : Check for ML2/OVN presence] ************************************
Wednesday 08 October 2025 15:57:00 +0000 (0:00:01.129) 0:00:05.824 *****
ok: [testbed-node-0] => {"changed": false, "msg": "All assertions passed"}
ok: [testbed-node-1] => {"changed": false, "msg": "All assertions passed"}
ok: [testbed-node-2] => {"changed": false, "msg": "All assertions passed"}
ok: [testbed-node-3] => {"changed": false, "msg": "All assertions passed"}
ok: [testbed-node-4] => {"changed": false, "msg": "All assertions passed"}
ok: [testbed-node-5] => {"changed": false, "msg": "All assertions passed"}

TASK [neutron : Check for ML2/OVS presence] ************************************
Wednesday 08 October 2025 15:57:01 +0000 (0:00:00.780) 0:00:06.605 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [service-ks-register : neutron | Creating services] ***********************
Wednesday 08 October 2025 15:57:01 +0000 (0:00:00.639) 0:00:07.245 *****
changed: [testbed-node-0] => (item=neutron (network))

TASK [service-ks-register : neutron | Creating endpoints] **********************
Wednesday 08 October 2025 15:57:05 +0000 (0:00:03.509) 0:00:10.754 *****
changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)

TASK [service-ks-register : neutron | Creating projects] ***********************
Wednesday 08 October 2025 15:57:12 +0000 (0:00:06.575) 0:00:17.330 *****
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : neutron | Creating users] **************************
Wednesday 08 October 2025 15:57:15 +0000 (0:00:03.306) 0:00:20.636 *****
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=neutron -> service)

TASK [service-ks-register : neutron | Creating roles] **************************
Wednesday 08 October 2025 15:57:19 +0000 (0:00:04.132) 0:00:24.769 *****
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : neutron | Granting user roles] *********************
Wednesday 08 October 2025 15:57:23 +0000 (0:00:03.568) 0:00:28.338 *****
changed: [testbed-node-0] => (item=neutron -> service -> admin)
changed: [testbed-node-0] => (item=neutron -> service -> service)

TASK [neutron : include_tasks] *************************************************
Wednesday 08 October 2025 15:57:31 +0000 (0:00:08.221) 0:00:36.560 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Load and persist kernel modules] *****************************************
Wednesday 08 October 2025 15:57:32 +0000 (0:00:00.762) 0:00:37.322 *****
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-4]

TASK [neutron : Check IPv6 support] ********************************************
Wednesday 08 October 2025 15:57:34 +0000 (0:00:02.063) 0:00:39.385 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Setting sysctl values] ***************************************************
Wednesday 08 October 2025 15:57:35 +0000 (0:00:01.058) 0:00:40.444 *****
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [neutron : Ensuring config directories exist] *****************************
Wednesday 08 October 2025 15:57:37 +0000 (0:00:02.184) 0:00:42.629 *****
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08
16:01:04.395695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.395709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.395723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.395742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.395758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.395770 | orchestrator | 2025-10-08 16:01:04.395781 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-10-08 16:01:04.395793 | orchestrator | Wednesday 08 October 2025 15:57:40 +0000 (0:00:03.170) 0:00:45.799 ***** 2025-10-08 16:01:04.395804 | orchestrator | [WARNING]: Skipped 2025-10-08 16:01:04.395815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-10-08 16:01:04.395827 | orchestrator | due to this access issue: 2025-10-08 16:01:04.395837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-10-08 16:01:04.395848 | orchestrator | a directory 2025-10-08 16:01:04.395859 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 16:01:04.395870 | orchestrator | 2025-10-08 16:01:04.395881 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-08 16:01:04.395922 | orchestrator | Wednesday 08 October 2025 15:57:41 +0000 (0:00:00.873) 0:00:46.672 ***** 2025-10-08 16:01:04.395935 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-08 16:01:04.395948 | orchestrator | 2025-10-08 16:01:04.395959 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-10-08 16:01:04.395970 | orchestrator | Wednesday 08 October 2025 15:57:42 +0000 (0:00:01.297) 0:00:47.970 ***** 2025-10-08 16:01:04.395982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.396001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.396013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.396030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.396073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.396086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.396105 | orchestrator | 2025-10-08 16:01:04.396116 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-10-08 16:01:04.396127 | orchestrator | Wednesday 08 October 2025 15:57:46 +0000 (0:00:03.552) 0:00:51.522 ***** 2025-10-08 16:01:04.396191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.396205 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.396229 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.396246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.396292 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.396306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396326 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.396337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396349 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.396360 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396371 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.396382 | orchestrator | 2025-10-08 16:01:04.396393 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-10-08 16:01:04.396404 | orchestrator | Wednesday 08 October 2025 15:57:48 +0000 (0:00:02.592) 0:00:54.115 ***** 2025-10-08 16:01:04.396421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-10-08 16:01:04.396433 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.396454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.396472 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-10-08 16:01:04.396492 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.396502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396512 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.396522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396532 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.396552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.396562 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.396572 | orchestrator | 2025-10-08 16:01:04.396582 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-10-08 16:01:04.396591 | orchestrator | Wednesday 08 October 2025 15:57:51 +0000 (0:00:02.366) 0:00:56.482 ***** 2025-10-08 16:01:04.396601 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396610 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.396620 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.396635 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.396645 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.396654 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.396664 | orchestrator | 2025-10-08 16:01:04.396673 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-10-08 16:01:04.396690 | orchestrator | Wednesday 08 October 2025 15:57:52 +0000 (0:00:01.697) 0:00:58.180 ***** 2025-10-08 16:01:04.396700 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396710 | orchestrator | 2025-10-08 16:01:04.396720 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-10-08 16:01:04.396730 | orchestrator | Wednesday 08 October 2025 15:57:53 +0000 
(0:00:00.124) 0:00:58.304 ***** 2025-10-08 16:01:04.396739 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396749 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.396758 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.396768 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.396778 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.396787 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.396797 | orchestrator | 2025-10-08 16:01:04.396807 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-10-08 16:01:04.396817 | orchestrator | Wednesday 08 October 2025 15:57:53 +0000 (0:00:00.762) 0:00:59.066 ***** 2025-10-08 16:01:04.396827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.396837 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.396847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.396857 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.396871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.396893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.396904 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.396913 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.396923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.396933 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.396943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.396953 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.396963 | orchestrator |
2025-10-08 16:01:04.396972 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-10-08 16:01:04.396982 | orchestrator | Wednesday 08 October 2025 15:57:57 +0000 (0:00:03.772) 0:01:02.839 *****
2025-10-08 16:01:04.396992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
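The alternating `changed`/`skipping` results in these tasks come from kolla-ansible looping over its service dictionary: whether a host acts on an item is gated by flags such as `enabled` and `host_in_groups` in each item's `value`. A minimal sketch of that gating (hypothetical helper name, not kolla-ansible's actual code):

```python
# Minimal sketch (assumed helper, NOT kolla-ansible's implementation) of the
# gating visible in the task results: a config task runs ("changed") on a host
# only when the service item is enabled and the host is in the service's group.

def should_configure(service: dict) -> bool:
    """Return True when a host should receive this service's config files."""
    return bool(service.get("enabled")) and bool(service.get("host_in_groups"))

# Shapes mirror the 'value' dicts printed in the loop items above.
neutron_server = {"container_name": "neutron_server",
                  "enabled": True, "host_in_groups": True}   # -> "changed"
agent_elsewhere = {"container_name": "neutron_ovn_metadata_agent",
                   "enabled": True, "host_in_groups": False}  # -> "skipping"

print(should_configure(neutron_server))   # True
print(should_configure(agent_elsewhere))  # False
```

On a controller node both conditions hold for `neutron-server`, so the copy tasks report `changed`; on the same node the metadata-agent item fails the group check and is reported as `skipping`, which matches the per-node pattern in the log.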
2025-10-08 16:01:04.397082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397103 | orchestrator |
2025-10-08 16:01:04.397119 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-10-08 16:01:04.397129 | orchestrator | Wednesday 08 October 2025 15:58:02 +0000 (0:00:04.473) 0:01:07.313 *****
2025-10-08 16:01:04.397162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397244 | orchestrator |
2025-10-08 16:01:04.397254 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-10-08 16:01:04.397264 | orchestrator | Wednesday 08 October 2025 15:58:09 +0000 (0:00:07.055) 0:01:14.369 *****
2025-10-08 16:01:04.397282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397292 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.397302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397312 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.397322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397338 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.397352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397362 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397383 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397409 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397419 | orchestrator |
2025-10-08 16:01:04.397429 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-10-08 16:01:04.397439 | orchestrator | Wednesday 08 October 2025 15:58:11 +0000 (0:00:02.709) 0:01:17.078 *****
2025-10-08 16:01:04.397448 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397458 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:01:04.397468 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397478 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397487 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:01:04.397497 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:01:04.397506 | orchestrator |
2025-10-08 16:01:04.397516 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-10-08 16:01:04.397526 | orchestrator | Wednesday 08 October 2025 15:58:15 +0000 (0:00:03.537) 0:01:20.615 *****
2025-10-08 16:01:04.397536 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397552 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397572 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.397596 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.397651 | orchestrator |
2025-10-08 16:01:04.397660 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-10-08 16:01:04.397670 | orchestrator | Wednesday 08 October 2025 15:58:19 +0000 (0:00:04.537) 0:01:25.152 *****
2025-10-08 16:01:04.397680 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.397690 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.397699 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.397709 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397719 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397728 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397738 | orchestrator |
2025-10-08 16:01:04.397747 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-10-08 16:01:04.397757 | orchestrator | Wednesday 08 October 2025 15:58:22 +0000 (0:00:02.472) 0:01:27.625 *****
2025-10-08 16:01:04.397767 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.397776 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.397786 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.397796 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397805 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397815 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397824 | orchestrator |
2025-10-08 16:01:04.397834 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-10-08 16:01:04.397844 | orchestrator | Wednesday 08 October 2025 15:58:24 +0000 (0:00:02.390) 0:01:30.016 *****
2025-10-08 16:01:04.397854 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397870 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.397880 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.397890 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.397900 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.397909 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397919 | orchestrator |
2025-10-08 16:01:04.397929 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-10-08 16:01:04.397938 | orchestrator | Wednesday 08 October 2025 15:58:27 +0000 (0:00:02.889) 0:01:32.905 *****
2025-10-08 16:01:04.397948 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.397958 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.397967 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.397977 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.397986 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.397996 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398006 | orchestrator |
2025-10-08 16:01:04.398061 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-10-08 16:01:04.398072 | orchestrator | Wednesday 08 October 2025 15:58:29 +0000 (0:00:02.181) 0:01:35.087 *****
2025-10-08 16:01:04.398082 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398092 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398101 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398111 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398127 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398179 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398198 | orchestrator |
2025-10-08 16:01:04.398208 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-10-08 16:01:04.398218 | orchestrator | Wednesday 08 October 2025 15:58:32 +0000 (0:00:02.299) 0:01:37.387 *****
2025-10-08 16:01:04.398228 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398237 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398247 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398256 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398266 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398275 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398285 | orchestrator |
2025-10-08 16:01:04.398295 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-10-08 16:01:04.398304 | orchestrator | Wednesday 08 October 2025 15:58:33 +0000 (0:00:01.802) 0:01:39.189 *****
2025-10-08 16:01:04.398314 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398324 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398334 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398343 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398353 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398363 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398373 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398382 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398392 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398402 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398411 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-08 16:01:04.398421 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398431 | orchestrator |
2025-10-08 16:01:04.398440 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-10-08 16:01:04.398450 | orchestrator | Wednesday 08 October 2025 15:58:35 +0000 (0:00:01.884) 0:01:41.073 *****
2025-10-08 16:01:04.398461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.398471 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.398502 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-08 16:01:04.398529 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.398549 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.398569 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.398589 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398599 | orchestrator |
2025-10-08 16:01:04.398608 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-10-08 16:01:04.398618 | orchestrator | Wednesday 08 October 2025 15:58:38 +0000 (0:00:03.000) 0:01:44.073 *****
2025-10-08 16:01:04.398632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.398649 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.398665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.398676 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.398686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.398696 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.398705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.398713 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.398721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.398734 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.398746 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-08 16:01:04.398754 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398762 | orchestrator |
2025-10-08 16:01:04.398770 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-10-08 16:01:04.398778 | orchestrator | Wednesday 08 October 2025 15:58:41 +0000 (0:00:02.252) 0:01:46.326 *****
2025-10-08 16:01:04.398786 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398798 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398806 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398814 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398822 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398830 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398838 | orchestrator |
2025-10-08 16:01:04.398846 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-10-08 16:01:04.398854 | orchestrator | Wednesday 08 October 2025 15:58:43 +0000 (0:00:02.308) 0:01:48.634 *****
2025-10-08 16:01:04.398862 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398870 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398878 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398885 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:01:04.398893 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:01:04.398901 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:01:04.398909 | orchestrator |
2025-10-08 16:01:04.398917 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-10-08 16:01:04.398925 | orchestrator | Wednesday 08 October 2025 15:58:47 +0000 (0:00:03.700) 0:01:52.335 *****
2025-10-08 16:01:04.398933 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.398941 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.398949 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.398957 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.398965 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.398972 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.398980 | orchestrator |
2025-10-08 16:01:04.398988 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-10-08 16:01:04.398996 | orchestrator | Wednesday 08 October 2025 15:58:50 +0000 (0:00:03.136) 0:01:55.471 *****
2025-10-08 16:01:04.399004 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399012 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399020 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399028 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399036 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399044 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399051 | orchestrator |
2025-10-08 16:01:04.399059 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-10-08 16:01:04.399067 | orchestrator | Wednesday 08 October 2025 15:58:53 +0000 (0:00:03.329) 0:01:58.801 *****
2025-10-08 16:01:04.399084 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399092 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399100 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399107 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399115 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399123 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399131 | orchestrator |
2025-10-08 16:01:04.399152 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-10-08 16:01:04.399161 | orchestrator | Wednesday 08 October 2025 15:58:55 +0000 (0:00:02.334) 0:02:01.136 *****
2025-10-08 16:01:04.399168 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399177 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399184 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399192 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399200 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399208 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399216 | orchestrator |
2025-10-08 16:01:04.399224 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-10-08 16:01:04.399232 | orchestrator | Wednesday 08 October 2025 15:58:57 +0000 (0:00:02.046) 0:02:03.183 *****
2025-10-08 16:01:04.399240 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399247 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399255 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399263 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399271 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399279 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399287 | orchestrator |
2025-10-08 16:01:04.399295 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-10-08 16:01:04.399303 | orchestrator | Wednesday 08 October 2025 15:58:59 +0000 (0:00:01.994) 0:02:05.178 *****
2025-10-08 16:01:04.399310 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399318 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399326 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399334 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399342 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399350 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399358 | orchestrator |
2025-10-08 16:01:04.399366 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-10-08 16:01:04.399373 | orchestrator | Wednesday 08 October 2025 15:59:03 +0000 (0:00:03.205) 0:02:08.384 *****
2025-10-08 16:01:04.399381 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399389 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:01:04.399397 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:01:04.399405 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399416 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:01:04.399424 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:01:04.399432 | orchestrator |
2025-10-08 16:01:04.399440 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-10-08 16:01:04.399448 | orchestrator | Wednesday 08 October 2025 15:59:06 +0000 (0:00:03.099) 0:02:11.483 *****
2025-10-08 16:01:04.399456 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-10-08 16:01:04.399465 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:01:04.399473 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-10-08 16:01:04.399481 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:01:04.399489 | orchestrator |
skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-08 16:01:04.399496 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.399504 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-08 16:01:04.399512 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.399530 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-08 16:01:04.399538 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.399546 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-08 16:01:04.399554 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.399562 | orchestrator | 2025-10-08 16:01:04.399570 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-10-08 16:01:04.399578 | orchestrator | Wednesday 08 October 2025 15:59:09 +0000 (0:00:03.267) 0:02:14.750 ***** 2025-10-08 16:01:04.399586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-10-08 16:01:04.399594 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.399603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-08 16:01:04.399611 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.399619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-10-08 16:01:04.399627 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.399639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.399658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.399667 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.399675 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.399683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-08 16:01:04.399691 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.399699 | orchestrator | 2025-10-08 16:01:04.399707 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-10-08 16:01:04.399715 | orchestrator | Wednesday 08 October 2025 15:59:12 +0000 (0:00:02.704) 0:02:17.455 ***** 2025-10-08 16:01:04.399723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.399735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.399748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.399761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-08 16:01:04.399770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.399778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-08 16:01:04.399786 | orchestrator | 2025-10-08 16:01:04.399794 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-08 16:01:04.399802 | orchestrator | Wednesday 08 October 2025 15:59:15 +0000 (0:00:03.620) 0:02:21.075 ***** 2025-10-08 16:01:04.399810 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:04.399818 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:04.399826 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:04.399834 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:01:04.399841 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:01:04.399849 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:01:04.399857 | orchestrator | 2025-10-08 16:01:04.399865 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-10-08 16:01:04.399880 | orchestrator | Wednesday 08 October 2025 15:59:16 +0000 (0:00:00.844) 0:02:21.919 ***** 2025-10-08 16:01:04.399888 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:04.399896 | orchestrator | 2025-10-08 16:01:04.399904 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-10-08 16:01:04.399912 | orchestrator | Wednesday 08 October 2025 15:59:18 +0000 (0:00:02.236) 0:02:24.156 ***** 2025-10-08 16:01:04.399920 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:04.399928 | orchestrator | 2025-10-08 16:01:04.399939 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-10-08 16:01:04.399947 | orchestrator | Wednesday 08 October 2025 15:59:21 +0000 (0:00:02.343) 0:02:26.499 ***** 2025-10-08 16:01:04.399955 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:04.399963 | orchestrator | 2025-10-08 16:01:04.399971 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-10-08 16:01:04.399979 | orchestrator | Wednesday 08 October 2025 16:00:09 +0000 (0:00:48.095) 0:03:14.594 ***** 2025-10-08 16:01:04.399987 | orchestrator | 2025-10-08 16:01:04.399994 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-08 16:01:04.400002 | orchestrator | Wednesday 08 October 2025 16:00:09 +0000 (0:00:00.242) 0:03:14.837 ***** 2025-10-08 16:01:04.400010 | orchestrator | 2025-10-08 16:01:04.400018 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-08 16:01:04.400026 | orchestrator | Wednesday 08 October 2025 16:00:10 +0000 (0:00:00.670) 0:03:15.508 ***** 2025-10-08 16:01:04.400034 | orchestrator | 2025-10-08 16:01:04.400041 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-08 16:01:04.400049 | orchestrator | Wednesday 08 October 2025 16:00:10 +0000 (0:00:00.067) 0:03:15.576 ***** 2025-10-08 16:01:04.400057 | orchestrator | 2025-10-08 16:01:04.400069 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-08 16:01:04.400077 | orchestrator | Wednesday 08 October 2025 16:00:10 +0000 (0:00:00.087) 0:03:15.664 ***** 2025-10-08 16:01:04.400085 | orchestrator | 2025-10-08 16:01:04.400093 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-08 16:01:04.400100 | orchestrator | Wednesday 08 October 2025 16:00:10 +0000 (0:00:00.204) 0:03:15.869 ***** 2025-10-08 16:01:04.400108 | orchestrator | 2025-10-08 16:01:04.400116 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-10-08 16:01:04.400124 | orchestrator | Wednesday 08 October 2025 16:00:10 +0000 (0:00:00.213) 0:03:16.082 ***** 2025-10-08 16:01:04.400132 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:04.400153 | orchestrator | 
changed: [testbed-node-2] 2025-10-08 16:01:04.400161 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:04.400169 | orchestrator | 2025-10-08 16:01:04.400177 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-10-08 16:01:04.400185 | orchestrator | Wednesday 08 October 2025 16:00:37 +0000 (0:00:26.838) 0:03:42.920 ***** 2025-10-08 16:01:04.400193 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:01:04.400201 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:01:04.400209 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:01:04.400217 | orchestrator | 2025-10-08 16:01:04.400225 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:01:04.400233 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-08 16:01:04.400241 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-10-08 16:01:04.400249 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-10-08 16:01:04.400257 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-08 16:01:04.400270 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-08 16:01:04.400278 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-08 16:01:04.400286 | orchestrator | 2025-10-08 16:01:04.400294 | orchestrator | 2025-10-08 16:01:04.400302 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:01:04.400310 | orchestrator | Wednesday 08 October 2025 16:01:03 +0000 (0:00:25.784) 0:04:08.704 ***** 2025-10-08 16:01:04.400317 | orchestrator | 
=============================================================================== 2025-10-08 16:01:04.400325 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.10s 2025-10-08 16:01:04.400333 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.84s 2025-10-08 16:01:04.400341 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 25.78s 2025-10-08 16:01:04.400361 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.22s 2025-10-08 16:01:04.400369 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.06s 2025-10-08 16:01:04.400377 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.58s 2025-10-08 16:01:04.400392 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.54s 2025-10-08 16:01:04.400400 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.47s 2025-10-08 16:01:04.400408 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.13s 2025-10-08 16:01:04.400416 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.77s 2025-10-08 16:01:04.400424 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.70s 2025-10-08 16:01:04.400431 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.62s 2025-10-08 16:01:04.400439 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.57s 2025-10-08 16:01:04.400451 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.55s 2025-10-08 16:01:04.400459 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.54s 2025-10-08 16:01:04.400467 | orchestrator | 
service-ks-register : neutron | Creating services ----------------------- 3.51s
2025-10-08 16:01:04.400475 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.33s
2025-10-08 16:01:04.400483 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.31s
2025-10-08 16:01:04.400491 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.27s
2025-10-08 16:01:04.400499 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.21s
2025-10-08 16:01:04.400506 | orchestrator | 2025-10-08 16:01:04 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:01:07.442548 | orchestrator | 2025-10-08 16:01:07 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:07.444216 | orchestrator | 2025-10-08 16:01:07 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:07.445770 | orchestrator | 2025-10-08 16:01:07 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED
2025-10-08 16:01:07.447812 | orchestrator | 2025-10-08 16:01:07 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:07.447840 | orchestrator | 2025-10-08 16:01:07 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:01:10.501722 | orchestrator | 2025-10-08 16:01:10 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:10.503196 | orchestrator | 2025-10-08 16:01:10 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:10.504205 | orchestrator | 2025-10-08 16:01:10 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED
2025-10-08 16:01:10.505404 | orchestrator | 2025-10-08 16:01:10 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:10.505429 | orchestrator | 2025-10-08 16:01:10 | INFO  | Wait 1 second(s) until the next check
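The TASKS RECAP above is the timing summary printed by the Ansible profile-tasks callback: a task name, a run of dashes, then a duration in seconds. An illustrative parser (a sketch, not part of the job's tooling) for those lines:

```python
import re

# Matches e.g. "neutron : Restart neutron-server container ------- 26.84s"
TIMING_RE = re.compile(r"^(?P<task>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")


def parse_timing_line(line):
    """Parse one profile-tasks timing line into (task_name, seconds)."""
    m = TIMING_RE.match(line.strip())
    if m is None:
        return None
    return m.group("task"), float(m.group("secs"))


task, secs = parse_timing_line(
    "neutron : Running Neutron bootstrap container -------------------------- 48.10s"
)
```

Collecting these pairs across runs makes it easy to spot tasks whose duration regresses between deployments.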
2025-10-08 16:01:13.553175 | orchestrator | 2025-10-08 16:01:13 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:13.554514 | orchestrator | 2025-10-08 16:01:13 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:13.556172 | orchestrator | 2025-10-08 16:01:13 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED
2025-10-08 16:01:13.557988 | orchestrator | 2025-10-08 16:01:13 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:13.558009 | orchestrator | 2025-10-08 16:01:13 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:01:16.607030 | orchestrator | 2025-10-08 16:01:16 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:16.608457 | orchestrator | 2025-10-08 16:01:16 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:16.609475 | orchestrator | 2025-10-08 16:01:16 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED
2025-10-08 16:01:16.610937 | orchestrator | 2025-10-08 16:01:16 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:16.610966 | orchestrator | 2025-10-08 16:01:16 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:01:19.648383 | orchestrator | 2025-10-08 16:01:19 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state STARTED
2025-10-08 16:01:19.650984 | orchestrator | 2025-10-08 16:01:19 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:01:19.651020 | orchestrator | 2025-10-08 16:01:19 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED
2025-10-08 16:01:19.651581 | orchestrator | 2025-10-08 16:01:19 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:01:19.651604 | orchestrator | 2025-10-08 16:01:19 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:01:22.689533 |
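The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are a fixed-interval poll loop: the orchestrator re-reads each task's state once per second until it leaves the running states. A generic sketch of that pattern, with all names assumed for illustration (this is not the osism CLI's actual implementation):

```python
import time


def wait_for_task(task_id, fetch_state, interval=1.0, timeout=3600, sleep=time.sleep):
    """Poll fetch_state(task_id) every `interval` seconds until the task
    leaves the running states, or raise TimeoutError after `timeout` seconds.

    fetch_state is injected so the loop can be tested without a real backend.
    """
    waited = 0.0
    while waited <= timeout:
        state = fetch_state(task_id)
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state, e.g. SUCCESS or FAILURE
        sleep(interval)  # mirrors "Wait 1 second(s) until the next check"
        waited += interval
    raise TimeoutError(f"task {task_id} still running after {timeout}s")
```

Injecting `fetch_state` and `sleep` keeps the loop deterministic under test; in production they would call the task backend and `time.sleep`.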
orchestrator | 2025-10-08 16:01:22 | INFO  | Task fc80fd5e-d164-4975-92f7-1cf324d89ea3 is in state SUCCESS
2025-10-08 16:01:22.691396 | orchestrator |
2025-10-08 16:01:22.691463 | orchestrator |
2025-10-08 16:01:22.691473 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:01:22.691481 | orchestrator |
2025-10-08 16:01:22.691488 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:01:22.691495 | orchestrator | Wednesday 08 October 2025 15:58:13 +0000 (0:00:00.880) 0:00:00.880 *****
2025-10-08 16:01:22.691502 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:01:22.691510 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:01:22.691516 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:01:22.691523 | orchestrator |
2025-10-08 16:01:22.691529 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:01:22.691548 | orchestrator | Wednesday 08 October 2025 15:58:14 +0000 (0:00:00.767) 0:00:01.648 *****
2025-10-08 16:01:22.691555 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-10-08 16:01:22.691562 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-10-08 16:01:22.691568 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-10-08 16:01:22.691575 | orchestrator |
2025-10-08 16:01:22.691581 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-10-08 16:01:22.691587 | orchestrator |
2025-10-08 16:01:22.691594 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-10-08 16:01:22.691616 | orchestrator | Wednesday 08 October 2025 15:58:14 +0000 (0:00:00.415) 0:00:02.063 *****
2025-10-08 16:01:22.691623 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:01:22.691631 | orchestrator |
2025-10-08 16:01:22.691637 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-10-08 16:01:22.691644 | orchestrator | Wednesday 08 October 2025 15:58:15 +0000 (0:00:00.494) 0:00:02.557 *****
2025-10-08 16:01:22.691650 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-10-08 16:01:22.691656 | orchestrator |
2025-10-08 16:01:22.691663 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-10-08 16:01:22.691669 | orchestrator | Wednesday 08 October 2025 15:58:18 +0000 (0:00:03.495) 0:00:06.053 *****
2025-10-08 16:01:22.691675 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-10-08 16:01:22.691682 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-10-08 16:01:22.691688 | orchestrator |
2025-10-08 16:01:22.691695 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-10-08 16:01:22.691701 | orchestrator | Wednesday 08 October 2025 15:58:25 +0000 (0:00:06.757) 0:00:12.811 *****
2025-10-08 16:01:22.691707 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-08 16:01:22.691714 | orchestrator |
2025-10-08 16:01:22.691720 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-10-08 16:01:22.691726 | orchestrator | Wednesday 08 October 2025 15:58:28 +0000 (0:00:03.608) 0:00:16.419 *****
2025-10-08 16:01:22.691732 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-08 16:01:22.691739 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-10-08 16:01:22.691745 | orchestrator |
2025-10-08 16:01:22.691751 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-10-08 16:01:22.691757 |
orchestrator | Wednesday 08 October 2025 15:58:32 +0000 (0:00:04.037) 0:00:20.457 ***** 2025-10-08 16:01:22.691764 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 16:01:22.691770 | orchestrator | 2025-10-08 16:01:22.691776 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-10-08 16:01:22.691783 | orchestrator | Wednesday 08 October 2025 15:58:36 +0000 (0:00:03.958) 0:00:24.415 ***** 2025-10-08 16:01:22.691789 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-10-08 16:01:22.691795 | orchestrator | 2025-10-08 16:01:22.691801 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-10-08 16:01:22.691808 | orchestrator | Wednesday 08 October 2025 15:58:41 +0000 (0:00:04.676) 0:00:29.092 ***** 2025-10-08 16:01:22.691817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.691840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.691856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.691864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.691872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.691879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.691886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.691902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 
16:01:22.692252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692308 | orchestrator | 2025-10-08 16:01:22.692315 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-10-08 16:01:22.692323 | orchestrator | Wednesday 08 October 2025 15:58:45 +0000 (0:00:03.643) 0:00:32.735 ***** 2025-10-08 16:01:22.692329 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.692336 | orchestrator | 2025-10-08 16:01:22.692342 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-10-08 16:01:22.692348 | orchestrator | Wednesday 08 October 2025 15:58:45 +0000 (0:00:00.221) 0:00:32.957 ***** 2025-10-08 16:01:22.692355 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.692361 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.692367 | orchestrator | skipping: [testbed-node-2] 2025-10-08 
16:01:22.692373 | orchestrator | 2025-10-08 16:01:22.692380 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-08 16:01:22.692386 | orchestrator | Wednesday 08 October 2025 15:58:45 +0000 (0:00:00.286) 0:00:33.244 ***** 2025-10-08 16:01:22.692392 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 16:01:22.692398 | orchestrator | 2025-10-08 16:01:22.692405 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-10-08 16:01:22.692411 | orchestrator | Wednesday 08 October 2025 15:58:46 +0000 (0:00:00.666) 0:00:33.911 ***** 2025-10-08 16:01:22.692422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.692437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.692444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.692451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 
16:01:22.692508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.692572 | orchestrator | 2025-10-08 16:01:22.692579 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-10-08 16:01:22.692593 | orchestrator | Wednesday 08 October 2025 15:58:52 +0000 (0:00:06.191) 0:00:40.103 ***** 2025-10-08 16:01:22.692599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.692606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.692620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.692634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.692658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692677 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.692686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692711 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.692717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.692724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.692736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.692746 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693190 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:22.693201 | orchestrator | 2025-10-08 16:01:22.693207 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-10-08 16:01:22.693214 | orchestrator | Wednesday 08 October 2025 15:58:53 +0000 (0:00:01.016) 0:00:41.120 ***** 2025-10-08 16:01:22.693221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.693229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.693255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693294 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.693301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.693308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-10-08 16:01:22.693343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693423 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.693430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.693437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.693444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.693501 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:22.693507 | orchestrator | 2025-10-08 16:01:22.693514 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-10-08 16:01:22.693521 | orchestrator | Wednesday 08 October 2025 15:58:55 +0000 (0:00:01.615) 0:00:42.735 ***** 2025-10-08 16:01:22.693528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.693535 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.693559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.693570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693734 | orchestrator |
2025-10-08 16:01:22.693741 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-10-08 16:01:22.693748 | orchestrator | Wednesday 08 October 2025 15:59:01 +0000 (0:00:06.322) 0:00:49.058 *****
2025-10-08 16:01:22.693755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.693762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.693769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.693782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.693807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.693914 | orchestrator |
2025-10-08 16:01:22.693921 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-10-08 16:01:22.693928 | orchestrator | Wednesday 08 October 2025 15:59:21 +0000 (0:00:19.546) 0:01:08.604 *****
2025-10-08 16:01:22.693935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-10-08 16:01:22.693941 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-10-08 16:01:22.693948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-10-08 16:01:22.693954 | orchestrator |
2025-10-08 16:01:22.693960 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-10-08 16:01:22.693967 | orchestrator | Wednesday 08 October 2025 15:59:27 +0000 (0:00:05.971) 0:01:14.576 *****
2025-10-08 16:01:22.693973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-10-08 16:01:22.693980 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-10-08 16:01:22.693986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-10-08 16:01:22.693993 | orchestrator |
2025-10-08 16:01:22.693999 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-10-08 16:01:22.694005 | orchestrator | Wednesday 08 October 2025 15:59:29 +0000 (0:00:02.575) 0:01:17.152 *****
2025-10-08 16:01:22.694012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.694093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.694127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.694205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694263 | orchestrator |
2025-10-08 16:01:22.694271 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-10-08 16:01:22.694278 | orchestrator | Wednesday 08 October 2025 15:59:32 +0000 (0:00:03.303) 0:01:20.455 *****
2025-10-08 16:01:22.694285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-08 16:01:22.694315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-08 16:01:22.694327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-08 16:01:22.694350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 
16:01:22.694391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694451 | orchestrator | 2025-10-08 16:01:22.694457 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-08 16:01:22.694463 | orchestrator | Wednesday 08 October 2025 15:59:35 +0000 (0:00:02.578) 0:01:23.033 ***** 2025-10-08 16:01:22.694470 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.694476 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.694482 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:22.694489 | orchestrator | 2025-10-08 16:01:22.694495 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-10-08 16:01:22.694501 | orchestrator | Wednesday 08 October 2025 15:59:36 +0000 (0:00:00.559) 0:01:23.593 ***** 2025-10-08 16:01:22.694508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.694514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.694525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694560 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.694567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.694574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.694584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694618 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.694624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-08 16:01:22.694631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-08 16:01:22.694642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:01:22.694675 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:22.694681 | orchestrator | 2025-10-08 16:01:22.694688 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-10-08 16:01:22.694694 | orchestrator | Wednesday 08 October 2025 15:59:37 +0000 
(0:00:01.169) 0:01:24.763 ***** 2025-10-08 16:01:22.694701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.694711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.694718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-08 16:01:22.694725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:01:22.694849 | orchestrator | 2025-10-08 16:01:22.694855 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-08 16:01:22.694862 | orchestrator | Wednesday 08 October 2025 15:59:41 +0000 (0:00:04.402) 0:01:29.165 ***** 2025-10-08 16:01:22.694868 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:01:22.694874 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:01:22.694880 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:01:22.694887 | orchestrator | 2025-10-08 16:01:22.694893 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-10-08 16:01:22.694902 | orchestrator | Wednesday 08 October 2025 15:59:42 +0000 (0:00:00.369) 0:01:29.534 ***** 2025-10-08 16:01:22.694909 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-10-08 16:01:22.694919 | orchestrator | 2025-10-08 16:01:22.694925 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-10-08 16:01:22.694931 | orchestrator | Wednesday 08 October 2025 15:59:44 +0000 (0:00:02.244) 0:01:31.778 ***** 2025-10-08 16:01:22.694938 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-08 16:01:22.694944 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-10-08 16:01:22.694950 | orchestrator | 2025-10-08 16:01:22.694957 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-10-08 16:01:22.694963 | orchestrator | Wednesday 08 October 2025 15:59:46 +0000 (0:00:02.635) 0:01:34.414 ***** 
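For reference, every `(item={...})` dict echoed by the "Check designate containers" task above shares one structure. The following Python sketch (a hypothetical helper, not part of the playbook) rebuilds the `designate-worker` definition exactly as printed in the log, to make the shape easier to read than the wrapped one-line dumps:

```python
def make_service(name, tag="2024.2", registry="registry.osism.tech/kolla"):
    """Reconstruct a kolla-ansible service definition dict with the
    field layout seen in the log output above (illustrative only)."""
    container = name.replace("-", "_")
    return {
        "container_name": container,
        "group": name,
        "enabled": True,
        "image": f"{registry}/{name}:{tag}",
        "volumes": [
            f"/etc/kolla/{name}/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # Copied verbatim from the log: all designate services probe
            # port 5672 via the image's healthcheck_port helper.
            "test": ["CMD-SHELL", f"healthcheck_port {name} 5672"],
            "timeout": "30",
        },
    }

svc = make_service("designate-worker")
print(svc["container_name"])  # designate_worker
```

Only the image tag, volume prefix, and healthcheck command vary per service; the per-node `changed:` lines differ solely in the `key` of the item being iterated.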
2025-10-08 16:01:22.694969 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.694975 | orchestrator | 2025-10-08 16:01:22.694981 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-10-08 16:01:22.694987 | orchestrator | Wednesday 08 October 2025 16:00:06 +0000 (0:00:19.346) 0:01:53.760 ***** 2025-10-08 16:01:22.694994 | orchestrator | 2025-10-08 16:01:22.695000 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-10-08 16:01:22.695006 | orchestrator | Wednesday 08 October 2025 16:00:06 +0000 (0:00:00.279) 0:01:54.040 ***** 2025-10-08 16:01:22.695012 | orchestrator | 2025-10-08 16:01:22.695018 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-10-08 16:01:22.695025 | orchestrator | Wednesday 08 October 2025 16:00:06 +0000 (0:00:00.076) 0:01:54.116 ***** 2025-10-08 16:01:22.695031 | orchestrator | 2025-10-08 16:01:22.695037 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-10-08 16:01:22.695043 | orchestrator | Wednesday 08 October 2025 16:00:06 +0000 (0:00:00.070) 0:01:54.186 ***** 2025-10-08 16:01:22.695050 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695056 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695062 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695068 | orchestrator | 2025-10-08 16:01:22.695074 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-10-08 16:01:22.695081 | orchestrator | Wednesday 08 October 2025 16:00:16 +0000 (0:00:10.059) 0:02:04.248 ***** 2025-10-08 16:01:22.695087 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695093 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695099 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695105 | orchestrator | 2025-10-08 16:01:22.695112 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-10-08 16:01:22.695118 | orchestrator | Wednesday 08 October 2025 16:00:30 +0000 (0:00:13.808) 0:02:18.056 ***** 2025-10-08 16:01:22.695124 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695130 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695136 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695154 | orchestrator | 2025-10-08 16:01:22.695161 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-10-08 16:01:22.695167 | orchestrator | Wednesday 08 October 2025 16:00:37 +0000 (0:00:07.067) 0:02:25.124 ***** 2025-10-08 16:01:22.695173 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695179 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695185 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695192 | orchestrator | 2025-10-08 16:01:22.695198 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-10-08 16:01:22.695204 | orchestrator | Wednesday 08 October 2025 16:00:49 +0000 (0:00:12.269) 0:02:37.393 ***** 2025-10-08 16:01:22.695210 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695216 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695223 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695229 | orchestrator | 2025-10-08 16:01:22.695235 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-10-08 16:01:22.695241 | orchestrator | Wednesday 08 October 2025 16:01:00 +0000 (0:00:10.459) 0:02:47.853 ***** 2025-10-08 16:01:22.695247 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695254 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:01:22.695264 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:01:22.695270 | orchestrator | 2025-10-08 16:01:22.695276 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2025-10-08 16:01:22.695282 | orchestrator | Wednesday 08 October 2025 16:01:12 +0000 (0:00:12.435) 0:03:00.289 ***** 2025-10-08 16:01:22.695289 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:01:22.695295 | orchestrator | 2025-10-08 16:01:22.695301 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:01:22.695307 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-08 16:01:22.695314 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 16:01:22.695320 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-08 16:01:22.695327 | orchestrator | 2025-10-08 16:01:22.695333 | orchestrator | 2025-10-08 16:01:22.695342 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:01:22.695349 | orchestrator | Wednesday 08 October 2025 16:01:19 +0000 (0:00:07.066) 0:03:07.355 ***** 2025-10-08 16:01:22.695355 | orchestrator | =============================================================================== 2025-10-08 16:01:22.695361 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.55s 2025-10-08 16:01:22.695368 | orchestrator | designate : Running Designate bootstrap container ---------------------- 19.35s 2025-10-08 16:01:22.695374 | orchestrator | designate : Restart designate-api container ---------------------------- 13.81s 2025-10-08 16:01:22.695380 | orchestrator | designate : Restart designate-worker container ------------------------- 12.44s 2025-10-08 16:01:22.695389 | orchestrator | designate : Restart designate-producer container ----------------------- 12.27s 2025-10-08 16:01:22.695395 | orchestrator | designate : Restart designate-mdns 
container --------------------------- 10.46s 2025-10-08 16:01:22.695402 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.06s 2025-10-08 16:01:22.695408 | orchestrator | designate : Restart designate-central container ------------------------- 7.07s 2025-10-08 16:01:22.695414 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.07s 2025-10-08 16:01:22.695420 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.76s 2025-10-08 16:01:22.695426 | orchestrator | designate : Copying over config.json files for services ----------------- 6.32s 2025-10-08 16:01:22.695432 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.19s 2025-10-08 16:01:22.695439 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.97s 2025-10-08 16:01:22.695445 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.68s 2025-10-08 16:01:22.695451 | orchestrator | designate : Check designate containers ---------------------------------- 4.40s 2025-10-08 16:01:22.695457 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.04s 2025-10-08 16:01:22.695463 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.96s 2025-10-08 16:01:22.695469 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.64s 2025-10-08 16:01:22.695475 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.61s 2025-10-08 16:01:22.695482 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.50s 2025-10-08 16:01:22.695488 | orchestrator | 2025-10-08 16:01:22 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:01:22.695494 | orchestrator | 2025-10-08 16:01:22 | INFO  | Task 
c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED 2025-10-08 16:01:22.695500 | orchestrator | 2025-10-08 16:01:22 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:01:22.695510 | orchestrator | 2025-10-08 16:01:22 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:01:22.695517 | orchestrator | 2025-10-08 16:01:22 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:01:25.748391 | orchestrator | 2025-10-08 16:01:25 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:01:25.751878 | orchestrator | 2025-10-08 16:01:25 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED 2025-10-08 16:01:25.754396 | orchestrator | 2025-10-08 16:01:25 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:01:25.757589 | orchestrator | 2025-10-08 16:01:25 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:01:25.757614 | orchestrator | 2025-10-08 16:01:25 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:01:28.803462 | orchestrator | 2025-10-08 16:01:28 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:01:28.806456 | orchestrator | 2025-10-08 16:01:28 | INFO  | Task c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED 2025-10-08 16:01:28.809197 | orchestrator | 2025-10-08 16:01:28 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:01:28.811953 | orchestrator | 2025-10-08 16:01:28 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:01:28.812115 | orchestrator | 2025-10-08 16:01:28 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:01:31.866065 | orchestrator | 2025-10-08 16:01:31 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:01:31.867246 | orchestrator | 2025-10-08 16:01:31 | INFO  | Task 
c824450d-6fa3-4389-b285-cf9b5f049536 is in state STARTED [identical STARTED polls for these four tasks, repeated every ~3 seconds from 16:01:31 to 16:02:14, elided] 2025-10-08 16:02:17.670752 | orchestrator | 2025-10-08 16:02:17 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:02:17.672697 | orchestrator | 2025-10-08 16:02:17 | INFO  | Task
c824450d-6fa3-4389-b285-cf9b5f049536 is in state SUCCESS 2025-10-08 16:02:17.674239 | orchestrator | 2025-10-08 16:02:17.674279 | orchestrator | 2025-10-08 16:02:17.674290 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-08 16:02:17.674302 | orchestrator | 2025-10-08 16:02:17.674312 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-08 16:02:17.674323 | orchestrator | Wednesday 08 October 2025 16:01:08 +0000 (0:00:00.267) 0:00:00.267 ***** 2025-10-08 16:02:17.674334 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:02:17.674345 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:02:17.674355 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:02:17.674365 | orchestrator | 2025-10-08 16:02:17.674390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-08 16:02:17.674400 | orchestrator | Wednesday 08 October 2025 16:01:08 +0000 (0:00:00.335) 0:00:00.603 ***** 2025-10-08 16:02:17.674411 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-10-08 16:02:17.674421 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-10-08 16:02:17.674431 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-10-08 16:02:17.674442 | orchestrator | 2025-10-08 16:02:17.674451 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-10-08 16:02:17.674461 | orchestrator | 2025-10-08 16:02:17.674471 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-08 16:02:17.674481 | orchestrator | Wednesday 08 October 2025 16:01:08 +0000 (0:00:00.450) 0:00:01.054 ***** 2025-10-08 16:02:17.674491 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 16:02:17.674501 | orchestrator | 2025-10-08 
16:02:17.674511 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-10-08 16:02:17.674521 | orchestrator | Wednesday 08 October 2025 16:01:09 +0000 (0:00:00.547) 0:00:01.601 ***** 2025-10-08 16:02:17.674531 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-10-08 16:02:17.674541 | orchestrator | 2025-10-08 16:02:17.674550 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-10-08 16:02:17.674560 | orchestrator | Wednesday 08 October 2025 16:01:13 +0000 (0:00:03.571) 0:00:05.173 ***** 2025-10-08 16:02:17.674570 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-10-08 16:02:17.674581 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-10-08 16:02:17.674591 | orchestrator | 2025-10-08 16:02:17.674600 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-10-08 16:02:17.674610 | orchestrator | Wednesday 08 October 2025 16:01:19 +0000 (0:00:06.059) 0:00:11.232 ***** 2025-10-08 16:02:17.674620 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-08 16:02:17.674630 | orchestrator | 2025-10-08 16:02:17.674641 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-10-08 16:02:17.674651 | orchestrator | Wednesday 08 October 2025 16:01:22 +0000 (0:00:03.451) 0:00:14.684 ***** 2025-10-08 16:02:17.674661 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-08 16:02:17.674670 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-10-08 16:02:17.674680 | orchestrator | 2025-10-08 16:02:17.674690 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-10-08 16:02:17.674700 | orchestrator | Wednesday 08 October 2025 
16:01:26 +0000 (0:00:04.265) 0:00:18.950 ***** 2025-10-08 16:02:17.674730 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 16:02:17.674740 | orchestrator | 2025-10-08 16:02:17.674750 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-10-08 16:02:17.674760 | orchestrator | Wednesday 08 October 2025 16:01:30 +0000 (0:00:03.823) 0:00:22.773 ***** 2025-10-08 16:02:17.674770 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-10-08 16:02:17.674780 | orchestrator | 2025-10-08 16:02:17.674789 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-08 16:02:17.674799 | orchestrator | Wednesday 08 October 2025 16:01:34 +0000 (0:00:04.305) 0:00:27.079 ***** 2025-10-08 16:02:17.674808 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.674818 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:02:17.674828 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:02:17.674838 | orchestrator | 2025-10-08 16:02:17.674847 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-10-08 16:02:17.674857 | orchestrator | Wednesday 08 October 2025 16:01:35 +0000 (0:00:00.295) 0:00:27.374 ***** 2025-10-08 16:02:17.674871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.674903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.674915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.674926 | orchestrator | 2025-10-08 16:02:17.674935 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-10-08 16:02:17.674952 | orchestrator | Wednesday 08 October 2025 16:01:36 +0000 (0:00:00.889) 0:00:28.263 ***** 2025-10-08 16:02:17.674962 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.674972 | orchestrator | 2025-10-08 16:02:17.674982 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-10-08 16:02:17.674991 | orchestrator | Wednesday 08 October 2025 16:01:36 +0000 (0:00:00.147) 0:00:28.411 ***** 2025-10-08 16:02:17.675001 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.675011 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:02:17.675020 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:02:17.675030 | orchestrator | 2025-10-08 16:02:17.675040 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-08 16:02:17.675049 | orchestrator | Wednesday 08 October 2025 16:01:36 +0000 (0:00:00.480) 0:00:28.891 ***** 2025-10-08 16:02:17.675059 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 16:02:17.675069 | orchestrator | 2025-10-08 16:02:17.675078 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-10-08 16:02:17.675088 | orchestrator | Wednesday 08 October 2025 16:01:37 +0000 (0:00:00.530) 0:00:29.422 ***** 2025-10-08 16:02:17.675098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675184 | orchestrator | 2025-10-08 16:02:17.675195 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-10-08 16:02:17.675205 | orchestrator | Wednesday 08 October 2025 16:01:38 +0000 (0:00:01.510) 0:00:30.932 ***** 2025-10-08 16:02:17.675215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675225 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.675236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675246 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:02:17.675263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675273 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:02:17.675283 | orchestrator | 2025-10-08 16:02:17.675293 | orchestrator | TASK [service-cert-copy : placement | Copying over backend 
internal TLS key] *** 2025-10-08 16:02:17.675302 | orchestrator | Wednesday 08 October 2025 16:01:39 +0000 (0:00:00.974) 0:00:31.907 ***** 2025-10-08 16:02:17.675317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675334 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.675344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675354 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:02:17.675364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675375 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:02:17.675384 | orchestrator | 2025-10-08 16:02:17.675394 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-10-08 16:02:17.675404 | orchestrator | Wednesday 08 October 2025 16:01:40 +0000 (0:00:00.707) 0:00:32.614 ***** 2025-10-08 16:02:17.675419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675462 | orchestrator | 2025-10-08 16:02:17.675471 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-10-08 16:02:17.675481 | orchestrator | Wednesday 08 October 2025 16:01:41 +0000 (0:00:01.367) 0:00:33.982 ***** 2025-10-08 16:02:17.675491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675542 | orchestrator | 2025-10-08 16:02:17.675551 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-10-08 16:02:17.675561 | orchestrator | Wednesday 08 October 2025 16:01:44 +0000 (0:00:02.648) 0:00:36.630 ***** 2025-10-08 16:02:17.675571 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-08 16:02:17.675581 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-08 16:02:17.675591 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-08 16:02:17.675601 | orchestrator | 2025-10-08 16:02:17.675610 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-10-08 16:02:17.675620 | orchestrator | Wednesday 08 October 2025 16:01:46 +0000 (0:00:01.642) 0:00:38.273 ***** 2025-10-08 16:02:17.675630 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:02:17.675640 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:02:17.675649 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:02:17.675711 | orchestrator | 2025-10-08 16:02:17.675830 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-10-08 16:02:17.675845 | orchestrator | Wednesday 08 October 2025 16:01:47 +0000 (0:00:01.456) 0:00:39.730 ***** 2025-10-08 16:02:17.675856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675867 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:02:17.675877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675887 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:02:17.675906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-08 16:02:17.675926 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:02:17.675935 | orchestrator | 2025-10-08 16:02:17.675945 | orchestrator | TASK [placement : Check placement 
containers] ********************************** 2025-10-08 16:02:17.675955 | orchestrator | Wednesday 08 October 2025 16:01:48 +0000 (0:00:00.514) 0:00:40.244 ***** 2025-10-08 16:02:17.675971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.675992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-08 16:02:17.676002 | orchestrator | 2025-10-08 16:02:17.676012 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-10-08 16:02:17.676022 | orchestrator | Wednesday 08 October 2025 16:01:49 +0000 (0:00:01.260) 0:00:41.505 ***** 2025-10-08 16:02:17.676031 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:02:17.676041 | orchestrator | 2025-10-08 16:02:17.676051 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-10-08 16:02:17.676060 | orchestrator | Wednesday 08 October 2025 16:01:52 +0000 (0:00:03.105) 0:00:44.610 ***** 2025-10-08 16:02:17.676082 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:02:17.676092 | orchestrator | 2025-10-08 16:02:17.676102 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-10-08 16:02:17.676111 | orchestrator | Wednesday 08 October 2025 16:01:54 +0000 (0:00:02.496) 0:00:47.107 ***** 2025-10-08 16:02:17.676121 | 
orchestrator | changed: [testbed-node-0]
2025-10-08 16:02:17.676131 | orchestrator |
2025-10-08 16:02:17.676140 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-08 16:02:17.676171 | orchestrator | Wednesday 08 October 2025 16:02:09 +0000 (0:00:14.530) 0:01:01.637 *****
2025-10-08 16:02:17.676181 | orchestrator |
2025-10-08 16:02:17.676190 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-08 16:02:17.676200 | orchestrator | Wednesday 08 October 2025 16:02:09 +0000 (0:00:00.075) 0:01:01.713 *****
2025-10-08 16:02:17.676210 | orchestrator |
2025-10-08 16:02:17.676225 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-08 16:02:17.676235 | orchestrator | Wednesday 08 October 2025 16:02:09 +0000 (0:00:00.092) 0:01:01.806 *****
2025-10-08 16:02:17.676245 | orchestrator |
2025-10-08 16:02:17.676255 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-10-08 16:02:17.676264 | orchestrator | Wednesday 08 October 2025 16:02:09 +0000 (0:00:00.081) 0:01:01.887 *****
2025-10-08 16:02:17.676274 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:02:17.676284 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:02:17.676293 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:02:17.676303 | orchestrator |
2025-10-08 16:02:17.676317 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:02:17.676328 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 16:02:17.676339 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:02:17.676348 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:02:17.676358 | orchestrator |
2025-10-08 16:02:17.676368 | orchestrator |
2025-10-08 16:02:17.676378 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:02:17.676387 | orchestrator | Wednesday 08 October 2025 16:02:17 +0000 (0:00:07.388) 0:01:09.276 *****
2025-10-08 16:02:17.676397 | orchestrator | ===============================================================================
2025-10-08 16:02:17.676407 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.53s
2025-10-08 16:02:17.676416 | orchestrator | placement : Restart placement-api container ----------------------------- 7.39s
2025-10-08 16:02:17.676426 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.06s
2025-10-08 16:02:17.676443 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.31s
2025-10-08 16:02:17.676461 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.27s
2025-10-08 16:02:17.676478 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.82s
2025-10-08 16:02:17.676495 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.57s
2025-10-08 16:02:17.676514 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.45s
2025-10-08 16:02:17.676532 | orchestrator | placement : Creating placement databases -------------------------------- 3.11s
2025-10-08 16:02:17.676546 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.65s
2025-10-08 16:02:17.676559 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.50s
2025-10-08 16:02:17.676570 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.64s
2025-10-08 16:02:17.676581 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.51s
2025-10-08 16:02:17.676600 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.46s
2025-10-08 16:02:17.676611 | orchestrator | placement : Copying over config.json files for services ----------------- 1.37s
2025-10-08 16:02:17.676622 | orchestrator | placement : Check placement containers ---------------------------------- 1.26s
2025-10-08 16:02:17.676633 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.98s
2025-10-08 16:02:17.676644 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.89s
2025-10-08 16:02:17.676655 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s
2025-10-08 16:02:17.676665 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2025-10-08 16:02:17.676677 | orchestrator | 2025-10-08 16:02:17 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:02:17.679631 | orchestrator | 2025-10-08 16:02:17 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED
2025-10-08 16:02:17.679974 | orchestrator | 2025-10-08 16:02:17 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:02:20.709800 | orchestrator | 2025-10-08 16:02:20 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:02:20.709899 | orchestrator | 2025-10-08 16:02:20 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:02:20.710458 | orchestrator | 2025-10-08 16:02:20 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:02:20.711597 | orchestrator | 2025-10-08 16:02:20 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED
2025-10-08 16:02:20.711619 | orchestrator | 2025-10-08 16:02:20 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:02:23.786426 | orchestrator |
2025-10-08 16:02:23 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:02:23.787225 | orchestrator | 2025-10-08 16:02:23 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:02:23.788265 | orchestrator | 2025-10-08 16:02:23 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:02:23.789113 | orchestrator | 2025-10-08 16:02:23 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED
2025-10-08 16:02:23.789136 | orchestrator | 2025-10-08 16:02:23 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:02:57.291549 | orchestrator | 2025-10-08 16:02:57 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:02:57.292378 | orchestrator | 2025-10-08 16:02:57 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:02:57.292830 | orchestrator | 2025-10-08 16:02:57 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:02:57.293577 | orchestrator | 2025-10-08 16:02:57 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED
2025-10-08 16:02:57.293808 | orchestrator | 2025-10-08 16:02:57 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:03:00.322516 | orchestrator | 2025-10-08 16:03:00 | INFO  | Task
c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:00.322920 | orchestrator | 2025-10-08 16:03:00 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:00.324436 | orchestrator | 2025-10-08 16:03:00 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:00.324913 | orchestrator | 2025-10-08 16:03:00 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:00.324949 | orchestrator | 2025-10-08 16:03:00 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:03.378577 | orchestrator | 2025-10-08 16:03:03 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:03.382071 | orchestrator | 2025-10-08 16:03:03 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:03.384198 | orchestrator | 2025-10-08 16:03:03 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:03.387734 | orchestrator | 2025-10-08 16:03:03 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:03.387759 | orchestrator | 2025-10-08 16:03:03 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:06.436394 | orchestrator | 2025-10-08 16:03:06 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:06.437208 | orchestrator | 2025-10-08 16:03:06 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:06.437707 | orchestrator | 2025-10-08 16:03:06 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:06.438931 | orchestrator | 2025-10-08 16:03:06 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:06.438954 | orchestrator | 2025-10-08 16:03:06 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:09.482896 | orchestrator | 2025-10-08 16:03:09 | INFO  | Task 
c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:09.484774 | orchestrator | 2025-10-08 16:03:09 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:09.487282 | orchestrator | 2025-10-08 16:03:09 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:09.489433 | orchestrator | 2025-10-08 16:03:09 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:09.489638 | orchestrator | 2025-10-08 16:03:09 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:12.539439 | orchestrator | 2025-10-08 16:03:12 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:12.540452 | orchestrator | 2025-10-08 16:03:12 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:12.542813 | orchestrator | 2025-10-08 16:03:12 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:12.544718 | orchestrator | 2025-10-08 16:03:12 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:12.544811 | orchestrator | 2025-10-08 16:03:12 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:15.613603 | orchestrator | 2025-10-08 16:03:15 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:15.613703 | orchestrator | 2025-10-08 16:03:15 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:15.613717 | orchestrator | 2025-10-08 16:03:15 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:15.613727 | orchestrator | 2025-10-08 16:03:15 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:15.613738 | orchestrator | 2025-10-08 16:03:15 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:18.638287 | orchestrator | 2025-10-08 16:03:18 | INFO  | Task 
c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:18.639662 | orchestrator | 2025-10-08 16:03:18 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:18.640964 | orchestrator | 2025-10-08 16:03:18 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:18.642283 | orchestrator | 2025-10-08 16:03:18 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:18.642300 | orchestrator | 2025-10-08 16:03:18 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:21.695326 | orchestrator | 2025-10-08 16:03:21 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:21.697287 | orchestrator | 2025-10-08 16:03:21 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:21.698553 | orchestrator | 2025-10-08 16:03:21 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:21.700298 | orchestrator | 2025-10-08 16:03:21 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state STARTED 2025-10-08 16:03:21.700324 | orchestrator | 2025-10-08 16:03:21 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:24.744702 | orchestrator | 2025-10-08 16:03:24 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:24.745637 | orchestrator | 2025-10-08 16:03:24 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:24.748219 | orchestrator | 2025-10-08 16:03:24 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED 2025-10-08 16:03:24.750770 | orchestrator | 2025-10-08 16:03:24 | INFO  | Task 231d3cfe-0b3a-411a-bea9-6a1b5f37f4b6 is in state SUCCESS 2025-10-08 16:03:24.753237 | orchestrator | 2025-10-08 16:03:24.753270 | orchestrator | 2025-10-08 16:03:24.753282 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-10-08 16:03:24.753295 | orchestrator |
2025-10-08 16:03:24.753306 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:03:24.753318 | orchestrator | Wednesday 08 October 2025 16:01:25 +0000 (0:00:00.285) 0:00:00.285 *****
2025-10-08 16:03:24.753329 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:24.753341 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:03:24.753352 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:03:24.753364 | orchestrator |
2025-10-08 16:03:24.753647 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:03:24.753660 | orchestrator | Wednesday 08 October 2025 16:01:25 +0000 (0:00:00.361) 0:00:00.647 *****
2025-10-08 16:03:24.753672 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-10-08 16:03:24.753684 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-10-08 16:03:24.753696 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-10-08 16:03:24.753707 | orchestrator |
2025-10-08 16:03:24.753718 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-10-08 16:03:24.753730 | orchestrator |
2025-10-08 16:03:24.753741 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-10-08 16:03:24.753753 | orchestrator | Wednesday 08 October 2025 16:01:26 +0000 (0:00:00.569) 0:00:01.217 *****
2025-10-08 16:03:24.753764 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:24.753776 | orchestrator |
2025-10-08 16:03:24.753803 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-10-08 16:03:24.753815 | orchestrator | Wednesday 08 October 2025 16:01:26 +0000 (0:00:00.609) 0:00:01.826 *****
2025-10-08 16:03:24.753827 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-10-08 16:03:24.753838 | orchestrator |
2025-10-08 16:03:24.753849 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-10-08 16:03:24.753860 | orchestrator | Wednesday 08 October 2025 16:01:30 +0000 (0:00:03.580) 0:00:05.406 *****
2025-10-08 16:03:24.753871 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-10-08 16:03:24.753883 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-10-08 16:03:24.753895 | orchestrator |
2025-10-08 16:03:24.753906 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-10-08 16:03:24.753917 | orchestrator | Wednesday 08 October 2025 16:01:37 +0000 (0:00:07.229) 0:00:12.636 *****
2025-10-08 16:03:24.753929 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-08 16:03:24.753941 | orchestrator |
2025-10-08 16:03:24.753953 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-10-08 16:03:24.753964 | orchestrator | Wednesday 08 October 2025 16:01:41 +0000 (0:00:03.514) 0:00:16.151 *****
2025-10-08 16:03:24.753975 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-08 16:03:24.753987 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-10-08 16:03:24.753998 | orchestrator |
2025-10-08 16:03:24.754009 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-10-08 16:03:24.754085 | orchestrator | Wednesday 08 October 2025 16:01:45 +0000 (0:00:03.975) 0:00:20.127 *****
2025-10-08 16:03:24.754113 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-08 16:03:24.754125 | orchestrator |
2025-10-08 16:03:24.754136 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-10-08 16:03:24.754170 | orchestrator | Wednesday 08 October 2025 16:01:48 +0000 (0:00:03.763) 0:00:23.890 *****
2025-10-08 16:03:24.754182 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-10-08 16:03:24.754193 | orchestrator |
2025-10-08 16:03:24.754204 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-10-08 16:03:24.754214 | orchestrator | Wednesday 08 October 2025 16:01:53 +0000 (0:00:04.379) 0:00:28.270 *****
2025-10-08 16:03:24.754225 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.754236 | orchestrator |
2025-10-08 16:03:24.754247 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-10-08 16:03:24.754260 | orchestrator | Wednesday 08 October 2025 16:01:57 +0000 (0:00:03.783) 0:00:32.053 *****
2025-10-08 16:03:24.754273 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.754285 | orchestrator |
2025-10-08 16:03:24.754297 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-10-08 16:03:24.754310 | orchestrator | Wednesday 08 October 2025 16:02:01 +0000 (0:00:04.257) 0:00:36.311 *****
2025-10-08 16:03:24.754322 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.754334 | orchestrator |
2025-10-08 16:03:24.754347 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-10-08 16:03:24.754359 | orchestrator | Wednesday 08 October 2025 16:02:05 +0000 (0:00:03.974) 0:00:40.285 *****
2025-10-08 16:03:24.754388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754500 | orchestrator |
2025-10-08 16:03:24.754514 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-10-08 16:03:24.754526 | orchestrator | Wednesday 08 October 2025 16:02:06 +0000 (0:00:01.431) 0:00:41.717 *****
2025-10-08 16:03:24.754539 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:24.754551 | orchestrator |
2025-10-08 16:03:24.754563 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-10-08 16:03:24.754576 | orchestrator | Wednesday 08 October 2025 16:02:06 +0000 (0:00:00.132) 0:00:41.850 *****
2025-10-08 16:03:24.754588 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:24.754601 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:24.754612 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:24.754623 | orchestrator |
2025-10-08 16:03:24.754634 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-10-08 16:03:24.754645 | orchestrator | Wednesday 08 October 2025 16:02:07 +0000 (0:00:00.603) 0:00:42.453 *****
2025-10-08 16:03:24.754656 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 16:03:24.754666 | orchestrator |
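The "Creating endpoints" task earlier in this play registers one internal and one public URL per service, derived from the internal and external FQDNs and the service port. A minimal sketch of that URL construction (hypothetical helper names; the real service-ks-register role builds these from Kolla's service definitions):

```python
def magnum_endpoints(internal_fqdn, external_fqdn, port=9511):
    """Build the internal/public endpoint URLs registered in the log,
    e.g. https://api-int.testbed.osism.xyz:9511/v1 for the internal interface."""
    return {
        "internal": f"https://{internal_fqdn}:{port}/v1",
        "public": f"https://{external_fqdn}:{port}/v1",
    }
```

Keystone distinguishes the `internal` and `public` interfaces so that tenant-facing traffic and control-plane traffic can use different networks and certificates.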
2025-10-08 16:03:24.754677 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-10-08 16:03:24.754688 | orchestrator | Wednesday 08 October 2025 16:02:08 +0000 (0:00:00.908) 0:00:43.361 *****
2025-10-08 16:03:24.754705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.754804 | orchestrator |
2025-10-08 16:03:24.754815 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-10-08 16:03:24.754827 | orchestrator | Wednesday 08 October 2025 16:02:10 +0000 (0:00:02.261) 0:00:45.623 *****
2025-10-08 16:03:24.754838 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:24.754849 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:03:24.754860 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:03:24.754870 | orchestrator |
2025-10-08 16:03:24.754881 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-10-08 16:03:24.754892 | orchestrator | Wednesday 08 October 2025 16:02:10 +0000 (0:00:00.311) 0:00:45.935 *****
2025-10-08 16:03:24.754903 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:24.754914 | orchestrator |
2025-10-08 16:03:24.754925 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-10-08 16:03:24.754936 | orchestrator | Wednesday 08 October 2025 16:02:11 +0000 (0:00:00.724) 0:00:46.660 *****
2025-10-08 16:03:24.754947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.754980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.755002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.755014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.755026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.755038 | orchestrator |
2025-10-08 16:03:24.755049 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-10-08 16:03:24.755060 | orchestrator | Wednesday 08 October 2025 16:02:14 +0000 (0:00:02.445) 0:00:49.106 *****
2025-10-08 16:03:24.755078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.755096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.755108 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:24.755125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-08 16:03:24.755137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:24.755166 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:24.755179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755216 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:24.755227 | orchestrator | 2025-10-08 16:03:24.755239 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-10-08 16:03:24.755250 | orchestrator | Wednesday 08 October 2025 16:02:14 +0000 (0:00:00.695) 0:00:49.802 ***** 2025-10-08 16:03:24.755272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755296 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:24.755308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755331 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:24.755351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755387 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:24.755398 | orchestrator | 2025-10-08 16:03:24.755409 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-10-08 16:03:24.755420 | orchestrator | Wednesday 08 October 2025 16:02:15 +0000 (0:00:01.097) 0:00:50.900 ***** 2025-10-08 16:03:24.755431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755518 | orchestrator | 2025-10-08 16:03:24.755529 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-10-08 16:03:24.755540 | orchestrator | Wednesday 08 October 2025 16:02:18 +0000 (0:00:02.809) 0:00:53.710 ***** 2025-10-08 16:03:24.755552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755646 | orchestrator | 2025-10-08 16:03:24.755657 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-10-08 16:03:24.755668 | orchestrator | Wednesday 08 
October 2025 16:02:27 +0000 (0:00:08.443) 0:01:02.154 ***** 2025-10-08 16:03:24.755686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755709 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:24.755726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755749 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:24.755761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-08 16:03:24.755784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:24.755796 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:24.755807 | orchestrator | 2025-10-08 16:03:24.755818 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-10-08 16:03:24.755829 | orchestrator | Wednesday 08 October 2025 16:02:27 +0000 (0:00:00.572) 0:01:02.727 ***** 2025-10-08 16:03:24.755845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-08 16:03:24.755889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:24.755969 | orchestrator | 2025-10-08 16:03:24.755980 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-08 16:03:24.755992 | orchestrator | Wednesday 08 October 2025 16:02:29 +0000 (0:00:02.173) 0:01:04.900 ***** 2025-10-08 16:03:24.756003 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:24.756014 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:24.756025 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:24.756036 | orchestrator | 2025-10-08 16:03:24.756047 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-10-08 16:03:24.756058 | orchestrator | Wednesday 08 October 2025 16:02:30 +0000 (0:00:00.282) 0:01:05.182 ***** 2025-10-08 16:03:24.756069 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:03:24.756080 | orchestrator | 2025-10-08 16:03:24.756091 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-10-08 16:03:24.756101 | orchestrator | Wednesday 08 October 2025 16:02:32 +0000 (0:00:02.327) 0:01:07.510 ***** 2025-10-08 16:03:24.756112 | orchestrator | changed: 
[testbed-node-0]
2025-10-08 16:03:24.756123 | orchestrator |
2025-10-08 16:03:24.756134 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-10-08 16:03:24.756174 | orchestrator | Wednesday 08 October 2025 16:02:34 +0000 (0:00:02.283) 0:01:09.793 *****
2025-10-08 16:03:24.756185 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.756204 | orchestrator |
2025-10-08 16:03:24.756215 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-10-08 16:03:24.756226 | orchestrator | Wednesday 08 October 2025 16:02:51 +0000 (0:00:16.775) 0:01:26.569 *****
2025-10-08 16:03:24.756236 | orchestrator |
2025-10-08 16:03:24.756247 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-10-08 16:03:24.756258 | orchestrator | Wednesday 08 October 2025 16:02:51 +0000 (0:00:00.108) 0:01:26.677 *****
2025-10-08 16:03:24.756268 | orchestrator |
2025-10-08 16:03:24.756279 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-10-08 16:03:24.756290 | orchestrator | Wednesday 08 October 2025 16:02:51 +0000 (0:00:00.119) 0:01:26.796 *****
2025-10-08 16:03:24.756301 | orchestrator |
2025-10-08 16:03:24.756312 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-10-08 16:03:24.756322 | orchestrator | Wednesday 08 October 2025 16:02:51 +0000 (0:00:00.155) 0:01:26.952 *****
2025-10-08 16:03:24.756333 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.756344 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:24.756355 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:24.756366 | orchestrator |
2025-10-08 16:03:24.756376 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-10-08 16:03:24.756387 | orchestrator | Wednesday 08 October 2025 16:03:12 +0000 (0:00:20.975) 0:01:47.928 *****
2025-10-08 16:03:24.756398 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:24.756409 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:24.756420 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:24.756431 | orchestrator |
2025-10-08 16:03:24.756442 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:03:24.756453 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-08 16:03:24.756465 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:03:24.756476 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:03:24.756487 | orchestrator |
2025-10-08 16:03:24.756498 | orchestrator |
2025-10-08 16:03:24.756509 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:03:24.756520 | orchestrator | Wednesday 08 October 2025 16:03:24 +0000 (0:00:11.184) 0:01:59.112 *****
2025-10-08 16:03:24.756531 | orchestrator | ===============================================================================
2025-10-08 16:03:24.756542 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.98s
2025-10-08 16:03:24.756560 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.78s
2025-10-08 16:03:24.756571 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.18s
2025-10-08 16:03:24.756582 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.44s
2025-10-08 16:03:24.756593 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.23s
2025-10-08 16:03:24.756604 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.38s
2025-10-08 16:03:24.756614 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.26s
2025-10-08 16:03:24.756625 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.98s
2025-10-08 16:03:24.756636 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.97s
2025-10-08 16:03:24.756647 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.78s
2025-10-08 16:03:24.756658 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.76s
2025-10-08 16:03:24.756669 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.58s
2025-10-08 16:03:24.756687 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s
2025-10-08 16:03:24.756698 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.81s
2025-10-08 16:03:24.756709 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.45s
2025-10-08 16:03:24.756728 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.33s
2025-10-08 16:03:24.756739 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.28s
2025-10-08 16:03:24.756750 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.26s
2025-10-08 16:03:24.756761 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.17s
2025-10-08 16:03:24.756772 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.43s
2025-10-08 16:03:24.756783 | orchestrator | 2025-10-08 16:03:24 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:03:27.797991 | orchestrator | 2025-10-08 16:03:27 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08
16:03:27.799171 | orchestrator | 2025-10-08 16:03:27 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:03:27.800770 | orchestrator | 2025-10-08 16:03:27 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:03:27.801706 | orchestrator | 2025-10-08 16:03:27 | INFO  | Task 56592c42-3665-409f-9b9e-6d457639984e is in state STARTED
2025-10-08 16:03:27.801731 | orchestrator | 2025-10-08 16:03:27 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:03:30.839744 | orchestrator | 2025-10-08 16:03:30 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:03:30.841265 | orchestrator | 2025-10-08 16:03:30 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:03:30.841578 | orchestrator | 2025-10-08 16:03:30 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:03:30.843371 | orchestrator | 2025-10-08 16:03:30 | INFO  | Task 56592c42-3665-409f-9b9e-6d457639984e is in state SUCCESS
2025-10-08 16:03:30.843400 | orchestrator | 2025-10-08 16:03:30 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:03:33.886765 | orchestrator | 2025-10-08 16:03:33 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:03:33.887218 | orchestrator | 2025-10-08 16:03:33 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:03:33.888207 | orchestrator | 2025-10-08 16:03:33 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:03:33.889086 | orchestrator | 2025-10-08 16:03:33 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state STARTED
2025-10-08 16:03:33.889337 | orchestrator | 2025-10-08 16:03:33 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:03:36.941919 | orchestrator | 2025-10-08 16:03:36 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:03:36.944050 | orchestrator | 2025-10-08 16:03:36 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:03:36.945759 | orchestrator | 2025-10-08 16:03:36 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:03:36.951273 | orchestrator | 2025-10-08 16:03:36 | INFO  | Task 7230a695-49db-41d5-9c27-ff97b9098b74 is in state SUCCESS
2025-10-08 16:03:36.952915 | orchestrator |
2025-10-08 16:03:36.952949 | orchestrator |
2025-10-08 16:03:36.952962 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:03:36.952975 | orchestrator |
2025-10-08 16:03:36.952987 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:03:36.953000 | orchestrator | Wednesday 08 October 2025 16:03:28 +0000 (0:00:00.172) 0:00:00.172 *****
2025-10-08 16:03:36.953042 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.953055 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:03:36.953067 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:03:36.953078 | orchestrator |
2025-10-08 16:03:36.953090 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:03:36.953101 | orchestrator | Wednesday 08 October 2025 16:03:28 +0000 (0:00:00.303) 0:00:00.476 *****
2025-10-08 16:03:36.953113 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-10-08 16:03:36.953125 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-10-08 16:03:36.953137 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-10-08 16:03:36.953190 | orchestrator |
2025-10-08 16:03:36.953203 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-10-08 16:03:36.953214 | orchestrator |
2025-10-08 16:03:36.953225 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-10-08 16:03:36.953236 | orchestrator | Wednesday 08 October 2025 16:03:29 +0000 (0:00:00.785) 0:00:01.262 *****
2025-10-08 16:03:36.953247 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.953258 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:03:36.953269 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:03:36.953280 | orchestrator |
2025-10-08 16:03:36.953291 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:03:36.953304 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:03:36.953318 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:03:36.953349 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:03:36.953360 | orchestrator |
2025-10-08 16:03:36.953371 | orchestrator |
2025-10-08 16:03:36.953383 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:03:36.953393 | orchestrator | Wednesday 08 October 2025 16:03:30 +0000 (0:00:00.709) 0:00:01.972 *****
2025-10-08 16:03:36.953404 | orchestrator | ===============================================================================
2025-10-08 16:03:36.953415 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-10-08 16:03:36.953426 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.71s
2025-10-08 16:03:36.953437 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-10-08 16:03:36.953448 | orchestrator |
2025-10-08 16:03:36.953459 | orchestrator |
2025-10-08 16:03:36.953470 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:03:36.953481 | orchestrator |
2025-10-08 16:03:36.953491 | orchestrator | TASK [Group hosts based on
OpenStack release] **********************************
2025-10-08 16:03:36.953505 | orchestrator | Wednesday 08 October 2025 15:54:31 +0000 (0:00:00.219) 0:00:00.219 *****
2025-10-08 16:03:36.953519 | orchestrator | changed: [testbed-manager]
2025-10-08 16:03:36.953532 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.953545 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:36.953557 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:36.953569 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.953581 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.953594 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.953606 | orchestrator |
2025-10-08 16:03:36.953618 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:03:36.953630 | orchestrator | Wednesday 08 October 2025 15:54:32 +0000 (0:00:00.876) 0:00:01.096 *****
2025-10-08 16:03:36.953643 | orchestrator | changed: [testbed-manager]
2025-10-08 16:03:36.953655 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.953667 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:36.953678 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:36.953705 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.953718 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.953730 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.953742 | orchestrator |
2025-10-08 16:03:36.953754 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:03:36.953767 | orchestrator | Wednesday 08 October 2025 15:54:32 +0000 (0:00:00.601) 0:00:01.697 *****
2025-10-08 16:03:36.953779 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-10-08 16:03:36.953792 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-10-08 16:03:36.953805 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-10-08 16:03:36.953817 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-10-08 16:03:36.953828 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-10-08 16:03:36.953841 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-10-08 16:03:36.953854 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-10-08 16:03:36.953865 | orchestrator |
2025-10-08 16:03:36.953876 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-10-08 16:03:36.953886 | orchestrator |
2025-10-08 16:03:36.953897 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-10-08 16:03:36.953908 | orchestrator | Wednesday 08 October 2025 15:54:33 +0000 (0:00:00.822) 0:00:02.520 *****
2025-10-08 16:03:36.953919 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.953930 | orchestrator |
2025-10-08 16:03:36.953941 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-10-08 16:03:36.953952 | orchestrator | Wednesday 08 October 2025 15:54:34 +0000 (0:00:00.587) 0:00:03.107 *****
2025-10-08 16:03:36.953963 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-10-08 16:03:36.953987 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-10-08 16:03:36.953998 | orchestrator |
2025-10-08 16:03:36.954009 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-10-08 16:03:36.954069 | orchestrator | Wednesday 08 October 2025 15:54:38 +0000 (0:00:03.845) 0:00:06.952 *****
2025-10-08 16:03:36.954081 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-08 16:03:36.954092 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-08 16:03:36.954103 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.954114 | orchestrator |
2025-10-08 16:03:36.954125 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-10-08 16:03:36.954136 | orchestrator | Wednesday 08 October 2025 15:54:41 +0000 (0:00:03.836) 0:00:10.789 *****
2025-10-08 16:03:36.954162 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.954174 | orchestrator |
2025-10-08 16:03:36.954185 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-10-08 16:03:36.954196 | orchestrator | Wednesday 08 October 2025 15:54:42 +0000 (0:00:01.017) 0:00:11.806 *****
2025-10-08 16:03:36.954206 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.954217 | orchestrator |
2025-10-08 16:03:36.954228 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-10-08 16:03:36.954239 | orchestrator | Wednesday 08 October 2025 15:54:44 +0000 (0:00:01.502) 0:00:13.309 *****
2025-10-08 16:03:36.954250 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.954261 | orchestrator |
2025-10-08 16:03:36.954272 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-10-08 16:03:36.954283 | orchestrator | Wednesday 08 October 2025 15:54:47 +0000 (0:00:02.675) 0:00:15.985 *****
2025-10-08 16:03:36.954294 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.954305 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.954315 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.954326 | orchestrator |
2025-10-08 16:03:36.954337 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-10-08 16:03:36.954348 | orchestrator | Wednesday 08 October 2025 15:54:47 +0000 (0:00:00.351) 0:00:16.336 *****
2025-10-08 16:03:36.954373 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.954384 | orchestrator |
2025-10-08 16:03:36.954395 | orchestrator | TASK [nova : Create
cell0 mappings] ********************************************
2025-10-08 16:03:36.954407 | orchestrator | Wednesday 08 October 2025 15:55:17 +0000 (0:00:30.367) 0:00:46.704 *****
2025-10-08 16:03:36.954417 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.954428 | orchestrator |
2025-10-08 16:03:36.954439 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-10-08 16:03:36.954450 | orchestrator | Wednesday 08 October 2025 15:55:33 +0000 (0:00:16.008) 0:01:02.713 *****
2025-10-08 16:03:36.954855 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.954874 | orchestrator |
2025-10-08 16:03:36.954886 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-10-08 16:03:36.954898 | orchestrator | Wednesday 08 October 2025 15:55:49 +0000 (0:00:15.223) 0:01:17.936 *****
2025-10-08 16:03:36.954909 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.954921 | orchestrator |
2025-10-08 16:03:36.954932 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-10-08 16:03:36.954944 | orchestrator | Wednesday 08 October 2025 15:55:50 +0000 (0:00:01.277) 0:01:19.214 *****
2025-10-08 16:03:36.954955 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.954966 | orchestrator |
2025-10-08 16:03:36.954978 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-10-08 16:03:36.954990 | orchestrator | Wednesday 08 October 2025 15:55:50 +0000 (0:00:00.485) 0:01:19.699 *****
2025-10-08 16:03:36.955001 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.955013 | orchestrator |
2025-10-08 16:03:36.955024 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-10-08 16:03:36.955036 | orchestrator | Wednesday 08 October 2025 15:55:51 +0000 (0:00:00.543) 0:01:20.242 *****
2025-10-08 16:03:36.955047 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.955058 | orchestrator |
2025-10-08 16:03:36.955070 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-10-08 16:03:36.955081 | orchestrator | Wednesday 08 October 2025 15:56:11 +0000 (0:00:20.299) 0:01:40.542 *****
2025-10-08 16:03:36.955093 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.955104 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.955116 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.955127 | orchestrator |
2025-10-08 16:03:36.955139 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-10-08 16:03:36.955177 | orchestrator |
2025-10-08 16:03:36.955189 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-10-08 16:03:36.955200 | orchestrator | Wednesday 08 October 2025 15:56:11 +0000 (0:00:00.331) 0:01:40.873 *****
2025-10-08 16:03:36.955212 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.955223 | orchestrator |
2025-10-08 16:03:36.955235 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-10-08 16:03:36.955246 | orchestrator | Wednesday 08 October 2025 15:56:12 +0000 (0:00:00.585) 0:01:41.458 *****
2025-10-08 16:03:36.955257 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.955269 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.955280 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.955291 | orchestrator |
2025-10-08 16:03:36.955303 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-10-08 16:03:36.955314 | orchestrator | Wednesday 08 October 2025 15:56:14 +0000 (0:00:02.208) 0:01:43.667 *****
2025-10-08 16:03:36.955325 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.955337 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.955348 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.955359 | orchestrator |
2025-10-08 16:03:36.955371 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-10-08 16:03:36.955383 | orchestrator | Wednesday 08 October 2025 15:56:17 +0000 (0:00:02.844) 0:01:46.511 *****
2025-10-08 16:03:36.955405 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.955417 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.955438 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.955449 | orchestrator |
2025-10-08 16:03:36.955461 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-10-08 16:03:36.955472 | orchestrator | Wednesday 08 October 2025 15:56:18 +0000 (0:00:00.704) 0:01:47.216 *****
2025-10-08 16:03:36.955486 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-10-08 16:03:36.955499 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.955512 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-10-08 16:03:36.955525 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.955538 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-10-08 16:03:36.955634 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-10-08 16:03:36.955648 | orchestrator |
2025-10-08 16:03:36.956063 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-10-08 16:03:36.956076 | orchestrator | Wednesday 08 October 2025 15:56:26 +0000 (0:00:08.594) 0:01:55.811 *****
2025-10-08 16:03:36.956088 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.956099 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956110 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956121 | orchestrator |
2025-10-08 16:03:36.956132 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-10-08 16:03:36.956143 | orchestrator | Wednesday 08 October 2025 15:56:27 +0000 (0:00:00.396) 0:01:56.208 *****
2025-10-08 16:03:36.956210 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-10-08 16:03:36.956222 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.956233 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-10-08 16:03:36.956244 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956255 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-10-08 16:03:36.956266 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956277 | orchestrator |
2025-10-08 16:03:36.956288 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-10-08 16:03:36.956299 | orchestrator | Wednesday 08 October 2025 15:56:27 +0000 (0:00:00.719) 0:01:56.928 *****
2025-10-08 16:03:36.956310 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956331 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956342 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.956353 | orchestrator |
2025-10-08 16:03:36.956364 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-10-08 16:03:36.956376 | orchestrator | Wednesday 08 October 2025 15:56:28 +0000 (0:00:00.702) 0:01:57.630 *****
2025-10-08 16:03:36.956386 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956398 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956408 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.956419 | orchestrator |
2025-10-08 16:03:36.956430 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-10-08 16:03:36.956441 | orchestrator | Wednesday 08 October 2025 15:56:29 +0000 (0:00:01.107) 0:01:58.737 *****
2025-10-08 16:03:36.956452 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956463 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956474 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.956485 | orchestrator |
2025-10-08 16:03:36.956496 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-10-08 16:03:36.956507 | orchestrator | Wednesday 08 October 2025 15:56:32 +0000 (0:00:02.436) 0:02:01.173 *****
2025-10-08 16:03:36.956518 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956529 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956540 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.956551 | orchestrator |
2025-10-08 16:03:36.956562 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-10-08 16:03:36.956584 | orchestrator | Wednesday 08 October 2025 15:56:55 +0000 (0:00:22.879) 0:02:24.053 *****
2025-10-08 16:03:36.956596 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956607 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956617 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.956628 | orchestrator |
2025-10-08 16:03:36.956639 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-10-08 16:03:36.956650 | orchestrator | Wednesday 08 October 2025 15:57:08 +0000 (0:00:13.818) 0:02:37.871 *****
2025-10-08 16:03:36.956663 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:03:36.956675 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956687 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956699 | orchestrator |
2025-10-08 16:03:36.956712 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-10-08 16:03:36.956724 | orchestrator | Wednesday 08 October 2025 15:57:10 +0000 (0:00:01.088) 0:02:38.960 *****
2025-10-08 16:03:36.956736 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956748 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956758 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.956769 | orchestrator |
2025-10-08 16:03:36.956780 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-10-08 16:03:36.956791 | orchestrator | Wednesday 08 October 2025 15:57:23 +0000 (0:00:13.191) 0:02:52.152 *****
2025-10-08 16:03:36.956802 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.956813 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956824 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956835 | orchestrator |
2025-10-08 16:03:36.956847 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-10-08 16:03:36.956856 | orchestrator | Wednesday 08 October 2025 15:57:24 +0000 (0:00:01.122) 0:02:53.274 *****
2025-10-08 16:03:36.956866 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.956876 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.956885 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.956895 | orchestrator |
2025-10-08 16:03:36.956904 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-10-08 16:03:36.956914 | orchestrator |
2025-10-08 16:03:36.956924 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-10-08 16:03:36.956933 | orchestrator | Wednesday 08 October 2025 15:57:24 +0000 (0:00:00.525) 0:02:53.800 *****
2025-10-08 16:03:36.956943 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.956954 | orchestrator |
2025-10-08 16:03:36.957001 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-10-08 16:03:36.957014 | orchestrator | Wednesday 08 October 2025 15:57:25 +0000 (0:00:00.619) 0:02:54.420 *****
2025-10-08 16:03:36.957023 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-10-08 16:03:36.957033 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-10-08 16:03:36.957042 | orchestrator |
2025-10-08 16:03:36.957052 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-10-08 16:03:36.957062 | orchestrator | Wednesday 08 October 2025 15:57:28 +0000 (0:00:03.408) 0:02:57.828 *****
2025-10-08 16:03:36.957072 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-10-08 16:03:36.957082 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-10-08 16:03:36.957092 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-10-08 16:03:36.957102 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-10-08 16:03:36.957111 | orchestrator |
2025-10-08 16:03:36.957735 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-10-08 16:03:36.957753 | orchestrator | Wednesday 08 October 2025 15:57:35 +0000 (0:00:07.027) 0:03:04.856 *****
2025-10-08 16:03:36.957770 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-08 16:03:36.957778 | orchestrator |
2025-10-08 16:03:36.957786 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-10-08 16:03:36.957794 | orchestrator | Wednesday 08 October 2025 15:57:39 +0000 (0:00:03.263) 0:03:08.119 *****
2025-10-08 16:03:36.957802 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-08 16:03:36.957810 | orchestrator | changed:
[testbed-node-0] => (item=nova -> service) 2025-10-08 16:03:36.957819 | orchestrator | 2025-10-08 16:03:36.957832 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-10-08 16:03:36.957841 | orchestrator | Wednesday 08 October 2025 15:57:43 +0000 (0:00:04.284) 0:03:12.404 ***** 2025-10-08 16:03:36.957848 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-08 16:03:36.957856 | orchestrator | 2025-10-08 16:03:36.957864 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-10-08 16:03:36.957872 | orchestrator | Wednesday 08 October 2025 15:57:47 +0000 (0:00:03.889) 0:03:16.293 ***** 2025-10-08 16:03:36.957880 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-10-08 16:03:36.957888 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-10-08 16:03:36.957896 | orchestrator | 2025-10-08 16:03:36.957904 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-10-08 16:03:36.957912 | orchestrator | Wednesday 08 October 2025 15:57:56 +0000 (0:00:09.052) 0:03:25.346 ***** 2025-10-08 16:03:36.957926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.957941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958104 | orchestrator | 2025-10-08 16:03:36.958113 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-10-08 16:03:36.958121 | orchestrator | Wednesday 08 October 2025 15:57:58 +0000 (0:00:02.054) 0:03:27.400 ***** 2025-10-08 16:03:36.958129 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.958137 | orchestrator | 2025-10-08 16:03:36.958163 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-10-08 16:03:36.958171 | orchestrator | Wednesday 08 October 2025 15:57:58 +0000 (0:00:00.359) 0:03:27.759 ***** 2025-10-08 16:03:36.958179 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.958188 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.958196 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.958203 | orchestrator | 2025-10-08 16:03:36.958211 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-10-08 16:03:36.958219 | orchestrator | Wednesday 08 October 2025 15:57:59 +0000 (0:00:00.681) 0:03:28.441 ***** 
2025-10-08 16:03:36.958260 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-08 16:03:36.958269 | orchestrator | 2025-10-08 16:03:36.958278 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-10-08 16:03:36.958285 | orchestrator | Wednesday 08 October 2025 15:58:01 +0000 (0:00:01.497) 0:03:29.938 ***** 2025-10-08 16:03:36.958293 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.958301 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.958309 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.958317 | orchestrator | 2025-10-08 16:03:36.958325 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-08 16:03:36.958333 | orchestrator | Wednesday 08 October 2025 15:58:01 +0000 (0:00:00.232) 0:03:30.171 ***** 2025-10-08 16:03:36.958342 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 16:03:36.958350 | orchestrator | 2025-10-08 16:03:36.958358 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-10-08 16:03:36.958366 | orchestrator | Wednesday 08 October 2025 15:58:01 +0000 (0:00:00.516) 0:03:30.688 ***** 2025-10-08 16:03:36.958380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.958497 | orchestrator | 2025-10-08 16:03:36.958505 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-08 16:03:36.958513 | orchestrator | Wednesday 08 October 2025 15:58:04 +0000 (0:00:03.042) 0:03:33.731 ***** 2025-10-08 16:03:36.958522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958545 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.958580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958604 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.958613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958636 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.958646 | orchestrator | 2025-10-08 16:03:36.958654 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-08 16:03:36.958663 | orchestrator | Wednesday 08 October 2025 15:58:06 +0000 (0:00:01.997) 0:03:35.729 ***** 2025-10-08 16:03:36.958699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958724 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.958734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958758 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.958792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-08 16:03:36.958803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.958812 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.958822 | orchestrator | 2025-10-08 16:03:36.958831 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-10-08 16:03:36.958840 | orchestrator | Wednesday 08 October 2025 15:58:08 +0000 (0:00:01.375) 0:03:37.105 ***** 2025-10-08 16:03:36.958854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-08 16:03:36.958919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-08 
16:03:36.958930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.958940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.958955 | orchestrator |
2025-10-08 16:03:36.958964 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-10-08 16:03:36.958972 | orchestrator | Wednesday 08 October 2025 15:58:10 +0000 (0:00:02.482) 0:03:39.588 *****
2025-10-08 16:03:36.959004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959092 | orchestrator |
2025-10-08 16:03:36.959100 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-10-08 16:03:36.959108 | orchestrator | Wednesday 08 October 2025 15:58:18 +0000 (0:00:08.063) 0:03:47.651 *****
2025-10-08 16:03:36.959121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959161 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.959170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959187 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.959220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959246 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.959254 | orchestrator |
2025-10-08 16:03:36.959262 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-10-08 16:03:36.959277 | orchestrator | Wednesday 08 October 2025 15:58:19 +0000 (0:00:00.481) 0:03:48.132 *****
2025-10-08 16:03:36.959285 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.959294 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:36.959302 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:36.959310 | orchestrator |
2025-10-08 16:03:36.959317 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-10-08 16:03:36.959325 | orchestrator | Wednesday 08 October 2025 15:58:20 +0000 (0:00:01.478) 0:03:49.611 *****
2025-10-08 16:03:36.959333 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.959341 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.959349 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.959357 | orchestrator |
2025-10-08 16:03:36.959365 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-10-08 16:03:36.959373 | orchestrator | Wednesday 08 October 2025 15:58:21 +0000 (0:00:00.426) 0:03:50.037 *****
2025-10-08 16:03:36.959381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-10-08 16:03:36.959446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.959471 | orchestrator |
2025-10-08 16:03:36.959479 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-08 16:03:36.959488 | orchestrator | Wednesday 08 October 2025 15:58:23 +0000 (0:00:02.521) 0:03:52.559 *****
2025-10-08 16:03:36.959496 | orchestrator |
2025-10-08 16:03:36.959504 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-08 16:03:36.959534 | orchestrator | Wednesday 08 October 2025 15:58:23 +0000 (0:00:00.283) 0:03:52.842 *****
2025-10-08 16:03:36.959543 | orchestrator |
2025-10-08 16:03:36.959551 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-08 16:03:36.959559 | orchestrator | Wednesday 08 October 2025 15:58:24 +0000 (0:00:00.148) 0:03:52.990 *****
2025-10-08 16:03:36.959567 | orchestrator |
2025-10-08 16:03:36.959575 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-10-08 16:03:36.959583 | orchestrator | Wednesday 08 October 2025 15:58:24 +0000 (0:00:00.151) 0:03:53.142 *****
2025-10-08 16:03:36.959591 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.959598 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:36.959606 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:36.959614 | orchestrator |
2025-10-08 16:03:36.959622 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-10-08 16:03:36.959630 | orchestrator | Wednesday 08 October 2025 15:58:48 +0000 (0:00:24.190) 0:04:17.333 *****
2025-10-08 16:03:36.959638 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:03:36.959646 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:03:36.959660 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:03:36.959668 | orchestrator |
2025-10-08 16:03:36.959676 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-10-08 16:03:36.959684 | orchestrator |
2025-10-08 16:03:36.959691 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-08 16:03:36.959699 | orchestrator | Wednesday 08 October 2025 15:58:59 +0000 (0:00:11.348) 0:04:28.682 *****
2025-10-08 16:03:36.959707 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.959717 | orchestrator |
2025-10-08 16:03:36.959725 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-08 16:03:36.959733 | orchestrator | Wednesday 08 October 2025 15:59:02 +0000 (0:00:02.302) 0:04:30.984 *****
2025-10-08 16:03:36.959740 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.959749 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.959756 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.959769 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.959777 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.959785 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.959793 | orchestrator |
2025-10-08 16:03:36.959800 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-10-08 16:03:36.959808 | orchestrator | Wednesday 08 October 2025 15:59:02 +0000 (0:00:00.909) 0:04:31.894 *****
2025-10-08 16:03:36.959816 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.959824 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.959832 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.959840 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 16:03:36.959848 | orchestrator |
2025-10-08 16:03:36.959856 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-10-08 16:03:36.959864 | orchestrator | Wednesday 08 October 2025 15:59:04 +0000 (0:00:01.614) 0:04:33.508 *****
2025-10-08 16:03:36.959872 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-10-08 16:03:36.959880 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-10-08 16:03:36.959888 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-10-08 16:03:36.959896 | orchestrator |
2025-10-08 16:03:36.959904 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-10-08 16:03:36.959912 | orchestrator | Wednesday 08 October 2025 15:59:05 +0000 (0:00:01.250) 0:04:34.759 *****
2025-10-08 16:03:36.959920 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-10-08 16:03:36.959928 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-10-08 16:03:36.959936 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-10-08 16:03:36.959944 | orchestrator |
2025-10-08 16:03:36.959952 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-10-08 16:03:36.959960 | orchestrator | Wednesday 08 October 2025 15:59:07 +0000 (0:00:01.483) 0:04:36.242 *****
2025-10-08 16:03:36.959968 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-10-08 16:03:36.959976 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.959984 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-10-08 16:03:36.959992 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.960000 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-10-08 16:03:36.960008 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.960016 | orchestrator |
2025-10-08 16:03:36.960024 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-10-08 16:03:36.960032 | orchestrator | Wednesday 08 October 2025 15:59:08 +0000 (0:00:01.692) 0:04:37.935 *****
2025-10-08 16:03:36.960040 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960048 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960066 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.960074 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960082 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960090 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.960098 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960105 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960113 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.960121 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960129 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960137 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-08 16:03:36.960217 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960229 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960237 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-08 16:03:36.960245 | orchestrator |
2025-10-08 16:03:36.960253 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-10-08 16:03:36.960261 | orchestrator | Wednesday 08 October 2025 15:59:11 +0000 (0:00:02.244) 0:04:40.179 *****
2025-10-08 16:03:36.960269 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.960277 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.960285 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.960293 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.960301 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.960309 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.960316 | orchestrator |
2025-10-08 16:03:36.960324 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-10-08 16:03:36.960332 | orchestrator | Wednesday 08 October 2025 15:59:13 +0000 (0:00:01.819) 0:04:41.999 *****
2025-10-08 16:03:36.960340 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.960348 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.960356 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.960364 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.960372 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.960380 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.960388 | orchestrator |
2025-10-08 16:03:36.960395 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-10-08 16:03:36.960404 | orchestrator | Wednesday 08 October 2025 15:59:15 +0000 (0:00:01.962) 0:04:43.962 *****
2025-10-08 16:03:36.960417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-10-08 16:03:36.960428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-10-08 16:03:36.960444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-10-08 16:03:36.960453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-10-08 16:03:36.960486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-10-08 16:03:36.960496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-10-08 16:03:36.960509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-10-08 16:03:36.960517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-10-08 16:03:36.960533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-10-08 16:03:36.960583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.960628 | orchestrator |
2025-10-08 16:03:36.960636 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-08 16:03:36.960644 | orchestrator | Wednesday 08 October 2025 15:59:17 +0000 (0:00:02.551) 0:04:46.513 *****
2025-10-08 16:03:36.960651 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:03:36.960659 | orchestrator |
2025-10-08 16:03:36.960666 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-10-08 16:03:36.960673 | orchestrator | Wednesday 08 October 2025 15:59:18 +0000 (0:00:01.215) 0:04:47.728 *****
2025-10-08 16:03:36.960700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits':
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960772 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.960877 | orchestrator | 2025-10-08 16:03:36.960884 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-08 16:03:36.960891 | orchestrator | Wednesday 08 October 2025 15:59:23 +0000 (0:00:04.298) 0:04:52.027 ***** 2025-10-08 16:03:36.960898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.960905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.960912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.960937 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.960945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.960956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.960968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.960975 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.960983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.960990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.961017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961025 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.961032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.961071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961079 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.961086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.961093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961100 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.961107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.961132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961140 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.961160 | orchestrator | 2025-10-08 16:03:36.961167 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-08 16:03:36.961174 | orchestrator | Wednesday 08 October 2025 15:59:25 +0000 
(0:00:02.873) 0:04:54.900 ***** 2025-10-08 16:03:36.961181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.961199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.961206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961213 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.961220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.961227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 
16:03:36.961254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961266 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.961277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.961284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.961291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961298 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.961305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 
16:03:36.961330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961338 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.961350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.961357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.961364 | orchestrator | skipping: [testbed-node-0] 
2025-10-08 16:03:36.961375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-10-08 16:03:36.961382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-10-08 16:03:36.961389 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.961396 | orchestrator |
2025-10-08 16:03:36.961403 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-08 16:03:36.961410 | orchestrator | Wednesday 08 October 2025 15:59:28 +0000 (0:00:02.546) 0:04:57.447 *****
2025-10-08 16:03:36.961416 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.961423 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.961430 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.961437 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-08 16:03:36.961443 | orchestrator |
2025-10-08 16:03:36.961450 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-10-08 16:03:36.961457 | orchestrator | Wednesday 08 October 2025 15:59:29 +0000 (0:00:01.149) 0:04:58.596 *****
2025-10-08 16:03:36.961463 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-08 16:03:36.961470 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-08 16:03:36.961477 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-08 16:03:36.961483 | orchestrator |
2025-10-08 16:03:36.961490 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-10-08 16:03:36.961497 | orchestrator | Wednesday 08 October 2025 15:59:30 +0000 (0:00:01.281) 0:04:59.877 *****
2025-10-08 16:03:36.961503 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-08 16:03:36.961510 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-08 16:03:36.961521 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-08 16:03:36.961528 | orchestrator |
2025-10-08 16:03:36.961535 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-10-08 16:03:36.961542 | orchestrator | Wednesday 08 October 2025 15:59:31 +0000 (0:00:01.051) 0:05:00.929 *****
2025-10-08 16:03:36.961548 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:03:36.961555 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:03:36.961562 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:03:36.961568 | orchestrator |
2025-10-08 16:03:36.961575 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-10-08 16:03:36.961582 | orchestrator | Wednesday 08 October 2025 15:59:32 +0000 (0:00:00.544) 0:05:01.473 *****
2025-10-08 16:03:36.961589 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:03:36.961595 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:03:36.961602 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:03:36.961609 | orchestrator |
2025-10-08 16:03:36.961634 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-10-08 16:03:36.961642 | orchestrator | Wednesday 08 October 2025 15:59:33 +0000 (0:00:00.810) 0:05:02.284 *****
2025-10-08 16:03:36.961649 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-10-08 16:03:36.961655 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-10-08 16:03:36.961662 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-10-08 16:03:36.961669 | orchestrator |
2025-10-08 16:03:36.961676 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-10-08 16:03:36.961682 | orchestrator | Wednesday 08 October 2025 15:59:34 +0000 (0:00:01.326) 0:05:03.610 *****
2025-10-08 16:03:36.961689 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-10-08 16:03:36.961696 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-10-08 16:03:36.961703 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-10-08 16:03:36.961710 | orchestrator |
2025-10-08 16:03:36.961716 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-10-08 16:03:36.961723 | orchestrator | Wednesday 08 October 2025 15:59:36 +0000 (0:00:01.347) 0:05:04.958 *****
2025-10-08 16:03:36.961730 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-10-08 16:03:36.961736 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-10-08 16:03:36.961743 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-10-08 16:03:36.961750 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-10-08 16:03:36.961756 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-10-08 16:03:36.961763 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-10-08 16:03:36.961770 | orchestrator |
2025-10-08 16:03:36.961776 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-10-08 16:03:36.961783 | orchestrator | Wednesday 08 October 2025 15:59:40 +0000 (0:00:04.192) 0:05:09.150 *****
2025-10-08 16:03:36.961790 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.961797 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.961809 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.961816 | orchestrator |
2025-10-08 16:03:36.961822 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-10-08 16:03:36.961829 | orchestrator | Wednesday 08 October 2025 15:59:40 +0000 (0:00:00.588) 0:05:09.739 *****
2025-10-08 16:03:36.961836 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.961843 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.961849 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.961856 | orchestrator |
2025-10-08 16:03:36.961863 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-10-08 16:03:36.961870 | orchestrator | Wednesday 08 October 2025 15:59:41 +0000 (0:00:00.358) 0:05:10.097 *****
2025-10-08 16:03:36.961877 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.961883 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.961890 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.961902 | orchestrator |
2025-10-08 16:03:36.961909 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-10-08 16:03:36.961915 | orchestrator | Wednesday 08 October 2025 15:59:42 +0000 (0:00:01.377) 0:05:11.475 *****
2025-10-08 16:03:36.961922 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-10-08 16:03:36.961930 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-10-08 16:03:36.961937 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-10-08 16:03:36.961944 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-10-08 16:03:36.961951 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-10-08 16:03:36.961958 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-10-08 16:03:36.961964 | orchestrator |
2025-10-08 16:03:36.961971 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-10-08 16:03:36.961978 | orchestrator | Wednesday 08 October 2025 15:59:46 +0000 (0:00:03.597) 0:05:15.072 *****
2025-10-08 16:03:36.961985 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-10-08 16:03:36.961992 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-10-08 16:03:36.961999 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-10-08 16:03:36.962006 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-10-08 16:03:36.962032 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:03:36.962040 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-10-08 16:03:36.962047 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:03:36.962054 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-10-08 16:03:36.962061 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:03:36.962067 | orchestrator |
2025-10-08 16:03:36.962074 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-10-08 16:03:36.962081 | orchestrator | Wednesday 08 October 2025 15:59:49 +0000 (0:00:03.858) 0:05:18.931 *****
2025-10-08 16:03:36.962088 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.962094 | orchestrator |
2025-10-08 16:03:36.962101 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-10-08 16:03:36.962108 | orchestrator | Wednesday 08 October 2025 15:59:50 +0000 (0:00:00.143) 0:05:19.074 *****
2025-10-08 16:03:36.962115 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.962122 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.962128 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.962167 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.962176 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.962183 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:03:36.962190 | orchestrator |
2025-10-08 16:03:36.962197 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-10-08 16:03:36.962205 | orchestrator | Wednesday 08 October 2025 15:59:50 +0000 (0:00:00.636) 0:05:19.710 *****
2025-10-08 16:03:36.962212 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-08 16:03:36.962218 | orchestrator |
2025-10-08 16:03:36.962226 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-10-08 16:03:36.962233 | orchestrator | Wednesday 08 October 2025 15:59:51 +0000 (0:00:00.672) 0:05:20.355 *****
2025-10-08 16:03:36.962240 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:03:36.962247 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:03:36.962254 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:03:36.962261 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:03:36.962268 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:03:36.962280 | orchestrator | skipping: [testbed-node-2]
2025-10-08
16:03:36.962287 | orchestrator | 2025-10-08 16:03:36.962294 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-10-08 16:03:36.962301 | orchestrator | Wednesday 08 October 2025 15:59:52 +0000 (0:00:00.672) 0:05:21.027 ***** 2025-10-08 16:03:36.962314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962417 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962458 | orchestrator | 2025-10-08 16:03:36.962466 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-10-08 16:03:36.962473 | orchestrator | Wednesday 08 October 2025 15:59:55 +0000 (0:00:03.482) 0:05:24.510 ***** 2025-10-08 16:03:36.962480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.962491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.962505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.962516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.962524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.962531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.962543 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.962629 | orchestrator | 2025-10-08 16:03:36.962636 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-10-08 16:03:36.962643 | orchestrator | Wednesday 08 October 2025 16:00:01 +0000 (0:00:05.925) 0:05:30.435 ***** 2025-10-08 16:03:36.962650 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.962657 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.962664 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.962671 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.962678 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.962688 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.962695 | orchestrator | 2025-10-08 16:03:36.962702 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-10-08 16:03:36.962709 | orchestrator | Wednesday 08 October 2025 16:00:02 +0000 (0:00:01.302) 0:05:31.738 ***** 2025-10-08 16:03:36.962716 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-08 16:03:36.962723 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-08 16:03:36.962730 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-08 16:03:36.962737 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-08 16:03:36.962743 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-08 16:03:36.962750 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-08 
16:03:36.962757 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.962764 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-08 16:03:36.962771 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.962778 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-08 16:03:36.962785 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-08 16:03:36.962792 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.962799 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-08 16:03:36.962806 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-08 16:03:36.962813 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-08 16:03:36.962824 | orchestrator | 2025-10-08 16:03:36.962831 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-10-08 16:03:36.962838 | orchestrator | Wednesday 08 October 2025 16:00:06 +0000 (0:00:03.728) 0:05:35.466 ***** 2025-10-08 16:03:36.962845 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.962852 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.962858 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.962865 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.962872 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.962879 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.962886 | orchestrator | 2025-10-08 16:03:36.962893 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-10-08 16:03:36.962900 | orchestrator | Wednesday 08 October 2025 16:00:07 +0000 (0:00:00.600) 0:05:36.066 ***** 
2025-10-08 16:03:36.962907 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-08 16:03:36.962914 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-08 16:03:36.962921 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-08 16:03:36.962928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-08 16:03:36.962935 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-08 16:03:36.962945 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-08 16:03:36.962952 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.962959 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.962966 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.962973 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.962980 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.962987 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.962994 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.963001 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.963008 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963015 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.963021 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-08 16:03:36.963028 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963035 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.963048 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.963055 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-08 16:03:36.963061 | orchestrator | 2025-10-08 16:03:36.963069 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-10-08 16:03:36.963076 | orchestrator | Wednesday 08 October 2025 16:00:14 +0000 (0:00:06.991) 0:05:43.058 ***** 2025-10-08 16:03:36.963082 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-08 16:03:36.963094 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-08 16:03:36.963101 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-08 16:03:36.963108 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-08 16:03:36.963115 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 16:03:36.963122 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 16:03:36.963129 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-08 16:03:36.963136 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-08 16:03:36.963142 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-08 16:03:36.963184 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-08 16:03:36.963191 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-08 16:03:36.963198 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-08 16:03:36.963205 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 16:03:36.963212 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 16:03:36.963219 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-08 16:03:36.963226 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963233 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-08 16:03:36.963240 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963247 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-08 16:03:36.963254 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-08 16:03:36.963261 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963268 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 16:03:36.963275 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 
16:03:36.963281 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-08 16:03:36.963288 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 16:03:36.963295 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 16:03:36.963306 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-08 16:03:36.963313 | orchestrator | 2025-10-08 16:03:36.963320 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-10-08 16:03:36.963327 | orchestrator | Wednesday 08 October 2025 16:00:23 +0000 (0:00:09.212) 0:05:52.270 ***** 2025-10-08 16:03:36.963334 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.963341 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.963347 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.963354 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963361 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963368 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963375 | orchestrator | 2025-10-08 16:03:36.963381 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-10-08 16:03:36.963388 | orchestrator | Wednesday 08 October 2025 16:00:24 +0000 (0:00:00.666) 0:05:52.937 ***** 2025-10-08 16:03:36.963395 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.963402 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.963414 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.963421 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963427 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963434 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963441 | orchestrator | 2025-10-08 16:03:36.963448 | 
orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-10-08 16:03:36.963455 | orchestrator | Wednesday 08 October 2025 16:00:24 +0000 (0:00:00.604) 0:05:53.542 ***** 2025-10-08 16:03:36.963461 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963468 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963475 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963482 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.963489 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.963495 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.963502 | orchestrator | 2025-10-08 16:03:36.963509 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-10-08 16:03:36.963516 | orchestrator | Wednesday 08 October 2025 16:00:27 +0000 (0:00:02.442) 0:05:55.985 ***** 2025-10-08 16:03:36.963527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.963535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.963542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963550 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.963563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.963576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.963587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963595 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.963602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-08 16:03:36.963610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-08 16:03:36.963617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963633 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.963640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.963648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963655 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.963674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963681 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-08 16:03:36.963696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-08 16:03:36.963703 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963714 | orchestrator | 2025-10-08 16:03:36.963721 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-10-08 16:03:36.963727 | orchestrator | Wednesday 08 October 2025 16:00:28 +0000 (0:00:01.430) 0:05:57.416 ***** 2025-10-08 16:03:36.963734 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-08 16:03:36.963740 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-08 16:03:36.963747 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.963753 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-08 16:03:36.963763 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-08 16:03:36.963770 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-08 16:03:36.963776 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-08 16:03:36.963783 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.963789 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-08 16:03:36.963795 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-08 16:03:36.963802 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.963808 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-08 16:03:36.963814 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-08 16:03:36.963821 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.963827 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.963833 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-08 16:03:36.963840 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2025-10-08 16:03:36.963846 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.963852 | orchestrator | 2025-10-08 16:03:36.963859 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-10-08 16:03:36.963865 | orchestrator | Wednesday 08 October 2025 16:00:29 +0000 (0:00:00.894) 0:05:58.311 ***** 2025-10-08 16:03:36.963876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.963994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.964001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-08 16:03:36.964013 | orchestrator | 2025-10-08 16:03:36.964020 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-08 16:03:36.964026 | orchestrator | Wednesday 08 October 2025 16:00:32 +0000 (0:00:02.868) 0:06:01.179 ***** 2025-10-08 16:03:36.964032 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.964039 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.964045 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.964052 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.964058 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.964064 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.964071 | orchestrator | 2025-10-08 16:03:36.964077 | orchestrator | TASK 
[nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964083 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.918) 0:06:02.098 ***** 2025-10-08 16:03:36.964090 | orchestrator | 2025-10-08 16:03:36.964096 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964102 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.140) 0:06:02.238 ***** 2025-10-08 16:03:36.964108 | orchestrator | 2025-10-08 16:03:36.964115 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964121 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.138) 0:06:02.376 ***** 2025-10-08 16:03:36.964127 | orchestrator | 2025-10-08 16:03:36.964134 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964140 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.138) 0:06:02.515 ***** 2025-10-08 16:03:36.964161 | orchestrator | 2025-10-08 16:03:36.964171 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964178 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.133) 0:06:02.648 ***** 2025-10-08 16:03:36.964184 | orchestrator | 2025-10-08 16:03:36.964191 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-08 16:03:36.964197 | orchestrator | Wednesday 08 October 2025 16:00:33 +0000 (0:00:00.133) 0:06:02.782 ***** 2025-10-08 16:03:36.964203 | orchestrator | 2025-10-08 16:03:36.964209 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-10-08 16:03:36.964216 | orchestrator | Wednesday 08 October 2025 16:00:34 +0000 (0:00:00.330) 0:06:03.113 ***** 2025-10-08 16:03:36.964222 | orchestrator | changed: [testbed-node-0] 2025-10-08 
16:03:36.964228 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:03:36.964235 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:03:36.964241 | orchestrator | 2025-10-08 16:03:36.964247 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-10-08 16:03:36.964254 | orchestrator | Wednesday 08 October 2025 16:00:41 +0000 (0:00:07.142) 0:06:10.255 ***** 2025-10-08 16:03:36.964260 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:03:36.964266 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:03:36.964273 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:03:36.964279 | orchestrator | 2025-10-08 16:03:36.964285 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-10-08 16:03:36.964292 | orchestrator | Wednesday 08 October 2025 16:00:59 +0000 (0:00:17.777) 0:06:28.033 ***** 2025-10-08 16:03:36.964298 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.964304 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.964311 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.964317 | orchestrator | 2025-10-08 16:03:36.964323 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-10-08 16:03:36.964337 | orchestrator | Wednesday 08 October 2025 16:01:17 +0000 (0:00:18.660) 0:06:46.694 ***** 2025-10-08 16:03:36.964343 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.964350 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.964356 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.964362 | orchestrator | 2025-10-08 16:03:36.964372 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-10-08 16:03:36.964379 | orchestrator | Wednesday 08 October 2025 16:01:49 +0000 (0:00:31.313) 0:07:18.007 ***** 2025-10-08 16:03:36.964385 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking 
libvirt container is ready (10 retries left). 2025-10-08 16:03:36.964391 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-10-08 16:03:36.964398 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-10-08 16:03:36.964404 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.964411 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.964417 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.964423 | orchestrator | 2025-10-08 16:03:36.964430 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-10-08 16:03:36.964436 | orchestrator | Wednesday 08 October 2025 16:01:55 +0000 (0:00:06.385) 0:07:24.393 ***** 2025-10-08 16:03:36.964443 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.964449 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.964455 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.964462 | orchestrator | 2025-10-08 16:03:36.964468 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-10-08 16:03:36.964474 | orchestrator | Wednesday 08 October 2025 16:01:56 +0000 (0:00:00.856) 0:07:25.249 ***** 2025-10-08 16:03:36.964481 | orchestrator | changed: [testbed-node-5] 2025-10-08 16:03:36.964487 | orchestrator | changed: [testbed-node-4] 2025-10-08 16:03:36.964493 | orchestrator | changed: [testbed-node-3] 2025-10-08 16:03:36.964500 | orchestrator | 2025-10-08 16:03:36.964506 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-10-08 16:03:36.964512 | orchestrator | Wednesday 08 October 2025 16:02:18 +0000 (0:00:22.194) 0:07:47.443 ***** 2025-10-08 16:03:36.964519 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.964525 | orchestrator | 2025-10-08 16:03:36.964531 | orchestrator | TASK [nova-cell : 
Waiting for nova-compute services to register themselves] **** 2025-10-08 16:03:36.964538 | orchestrator | Wednesday 08 October 2025 16:02:18 +0000 (0:00:00.139) 0:07:47.582 ***** 2025-10-08 16:03:36.964544 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.964550 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.964557 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.964563 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.964569 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.964576 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-10-08 16:03:36.964582 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-08 16:03:36.964589 | orchestrator | 2025-10-08 16:03:36.964595 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-10-08 16:03:36.964601 | orchestrator | Wednesday 08 October 2025 16:02:43 +0000 (0:00:24.952) 0:08:12.535 ***** 2025-10-08 16:03:36.964608 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.964614 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.964620 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.964627 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.964633 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.964639 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.964645 | orchestrator | 2025-10-08 16:03:36.964652 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-10-08 16:03:36.964658 | orchestrator | Wednesday 08 October 2025 16:02:54 +0000 (0:00:10.636) 0:08:23.172 ***** 2025-10-08 16:03:36.964670 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.964676 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.964683 | orchestrator | 
skipping: [testbed-node-3] 2025-10-08 16:03:36.964689 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.964695 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.964705 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-10-08 16:03:36.964712 | orchestrator | 2025-10-08 16:03:36.964719 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-08 16:03:36.964725 | orchestrator | Wednesday 08 October 2025 16:02:58 +0000 (0:00:04.404) 0:08:27.576 ***** 2025-10-08 16:03:36.964732 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-08 16:03:36.964738 | orchestrator | 2025-10-08 16:03:36.964745 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-08 16:03:36.964751 | orchestrator | Wednesday 08 October 2025 16:03:12 +0000 (0:00:13.846) 0:08:41.423 ***** 2025-10-08 16:03:36.964758 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-08 16:03:36.964764 | orchestrator | 2025-10-08 16:03:36.964770 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-10-08 16:03:36.964777 | orchestrator | Wednesday 08 October 2025 16:03:13 +0000 (0:00:01.376) 0:08:42.799 ***** 2025-10-08 16:03:36.964783 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.964790 | orchestrator | 2025-10-08 16:03:36.964797 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-10-08 16:03:36.964803 | orchestrator | Wednesday 08 October 2025 16:03:15 +0000 (0:00:01.547) 0:08:44.346 ***** 2025-10-08 16:03:36.964809 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-08 16:03:36.964816 | orchestrator | 2025-10-08 16:03:36.964822 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-10-08 16:03:36.964829 
| orchestrator | Wednesday 08 October 2025 16:03:28 +0000 (0:00:13.162) 0:08:57.509 ***** 2025-10-08 16:03:36.964835 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:03:36.964841 | orchestrator | ok: [testbed-node-4] 2025-10-08 16:03:36.964848 | orchestrator | ok: [testbed-node-5] 2025-10-08 16:03:36.964854 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:03:36.964861 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:03:36.964867 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:03:36.964873 | orchestrator | 2025-10-08 16:03:36.964880 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-10-08 16:03:36.964886 | orchestrator | 2025-10-08 16:03:36.964893 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-10-08 16:03:36.964903 | orchestrator | Wednesday 08 October 2025 16:03:30 +0000 (0:00:01.859) 0:08:59.368 ***** 2025-10-08 16:03:36.964909 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:03:36.964916 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:03:36.964922 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:03:36.964928 | orchestrator | 2025-10-08 16:03:36.964935 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-10-08 16:03:36.964941 | orchestrator | 2025-10-08 16:03:36.964948 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-10-08 16:03:36.964954 | orchestrator | Wednesday 08 October 2025 16:03:31 +0000 (0:00:01.157) 0:09:00.526 ***** 2025-10-08 16:03:36.964960 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.964967 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.964973 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.964980 | orchestrator | 2025-10-08 16:03:36.964986 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-10-08 
16:03:36.964993 | orchestrator | 2025-10-08 16:03:36.964999 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-10-08 16:03:36.965006 | orchestrator | Wednesday 08 October 2025 16:03:32 +0000 (0:00:00.572) 0:09:01.098 ***** 2025-10-08 16:03:36.965012 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-10-08 16:03:36.965024 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-08 16:03:36.965031 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965037 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-10-08 16:03:36.965044 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-10-08 16:03:36.965050 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965057 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:03:36.965063 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-10-08 16:03:36.965069 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-08 16:03:36.965076 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965082 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-10-08 16:03:36.965089 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-10-08 16:03:36.965095 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965101 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:03:36.965108 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-10-08 16:03:36.965114 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-08 16:03:36.965121 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965127 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  
2025-10-08 16:03:36.965134 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-10-08 16:03:36.965140 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965158 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:03:36.965165 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-10-08 16:03:36.965172 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-08 16:03:36.965178 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965185 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-10-08 16:03:36.965191 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-10-08 16:03:36.965197 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965204 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-10-08 16:03:36.965210 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-08 16:03:36.965217 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965227 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-10-08 16:03:36.965233 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-10-08 16:03:36.965240 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965246 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.965253 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.965259 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-10-08 16:03:36.965266 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-08 16:03:36.965272 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-10-08 16:03:36.965278 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  
2025-10-08 16:03:36.965285 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-10-08 16:03:36.965291 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-10-08 16:03:36.965298 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.965304 | orchestrator | 2025-10-08 16:03:36.965310 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-10-08 16:03:36.965317 | orchestrator | 2025-10-08 16:03:36.965323 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-10-08 16:03:36.965330 | orchestrator | Wednesday 08 October 2025 16:03:33 +0000 (0:00:01.363) 0:09:02.461 ***** 2025-10-08 16:03:36.965342 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-10-08 16:03:36.965348 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-10-08 16:03:36.965355 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.965361 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-10-08 16:03:36.965367 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-10-08 16:03:36.965374 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.965380 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-10-08 16:03:36.965387 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-10-08 16:03:36.965397 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.965403 | orchestrator | 2025-10-08 16:03:36.965410 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-10-08 16:03:36.965416 | orchestrator | 2025-10-08 16:03:36.965423 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-10-08 16:03:36.965429 | orchestrator | Wednesday 08 October 2025 16:03:34 +0000 (0:00:00.796) 0:09:03.258 ***** 2025-10-08 16:03:36.965436 | 
orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.965442 | orchestrator | 2025-10-08 16:03:36.965449 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-10-08 16:03:36.965455 | orchestrator | 2025-10-08 16:03:36.965461 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-10-08 16:03:36.965468 | orchestrator | Wednesday 08 October 2025 16:03:34 +0000 (0:00:00.664) 0:09:03.922 ***** 2025-10-08 16:03:36.965474 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:03:36.965481 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:03:36.965487 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:03:36.965493 | orchestrator | 2025-10-08 16:03:36.965500 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:03:36.965507 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 16:03:36.965513 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-10-08 16:03:36.965520 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-08 16:03:36.965526 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-08 16:03:36.965533 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-08 16:03:36.965539 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-10-08 16:03:36.965546 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-10-08 16:03:36.965552 | orchestrator | 2025-10-08 16:03:36.965559 | orchestrator | 2025-10-08 16:03:36.965565 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-08 16:03:36.965572 | orchestrator | Wednesday 08 October 2025 16:03:35 +0000 (0:00:00.483) 0:09:04.406 ***** 2025-10-08 16:03:36.965578 | orchestrator | =============================================================================== 2025-10-08 16:03:36.965585 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.31s 2025-10-08 16:03:36.965591 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.37s 2025-10-08 16:03:36.965597 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.95s 2025-10-08 16:03:36.965604 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.19s 2025-10-08 16:03:36.965617 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.88s 2025-10-08 16:03:36.965624 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.19s 2025-10-08 16:03:36.965630 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.30s 2025-10-08 16:03:36.965640 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.66s 2025-10-08 16:03:36.965646 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.78s 2025-10-08 16:03:36.965653 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.01s 2025-10-08 16:03:36.965659 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.22s 2025-10-08 16:03:36.965666 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.85s 2025-10-08 16:03:36.965672 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.82s 2025-10-08 16:03:36.965678 | orchestrator | nova-cell : Create cell 
------------------------------------------------ 13.19s 2025-10-08 16:03:36.965685 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.16s 2025-10-08 16:03:36.965691 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.35s 2025-10-08 16:03:36.965697 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.64s 2025-10-08 16:03:36.965704 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.21s 2025-10-08 16:03:36.965710 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 9.05s 2025-10-08 16:03:36.965717 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.60s 2025-10-08 16:03:36.965723 | orchestrator | 2025-10-08 16:03:36 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:40.006510 | orchestrator | 2025-10-08 16:03:40 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:40.008710 | orchestrator | 2025-10-08 16:03:40 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:40.011396 | orchestrator | 2025-10-08 16:03:40 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:03:40.011424 | orchestrator | 2025-10-08 16:03:40 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:43.055629 | orchestrator | 2025-10-08 16:03:43 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED 2025-10-08 16:03:43.057278 | orchestrator | 2025-10-08 16:03:43 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED 2025-10-08 16:03:43.059309 | orchestrator | 2025-10-08 16:03:43 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:03:43.059501 | orchestrator | 2025-10-08 16:03:43 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:03:46.097026 | orchestrator | 
2025-10-08 16:03:46 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:03:46.099018 | orchestrator | 2025-10-08 16:03:46 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state STARTED
2025-10-08 16:03:46.100393 | orchestrator | 2025-10-08 16:03:46 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:03:46.100418 | orchestrator | 2025-10-08 16:03:46 | INFO  | Wait 1 second(s) until the next check
[... identical three-task STARTED/wait polling cycle repeated roughly every 3 seconds from 16:03:49 through 16:04:50 ...]
2025-10-08 16:04:53.219514 | orchestrator | 2025-10-08 16:04:53 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:04:53.224834 | orchestrator | 2025-10-08 16:04:53 | INFO  | Task 9743aaeb-e817-41b7-b0b1-71e5050ad45c is in state SUCCESS
2025-10-08 16:04:53.227925 | orchestrator |
2025-10-08 16:04:53.227969 | orchestrator |
2025-10-08 16:04:53.227983 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:04:53.227996 | orchestrator |
2025-10-08 16:04:53.228008 |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:04:53.228020 | orchestrator | Wednesday 08 October 2025 16:02:26 +0000 (0:00:00.270) 0:00:00.270 *****
2025-10-08 16:04:53.228033 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:04:53.228071 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:04:53.228083 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:04:53.228095 | orchestrator |
2025-10-08 16:04:53.228107 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:04:53.228118 | orchestrator | Wednesday 08 October 2025 16:02:26 +0000 (0:00:00.268) 0:00:00.539 *****
2025-10-08 16:04:53.228129 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-10-08 16:04:53.228141 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-10-08 16:04:53.228152 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-10-08 16:04:53.228194 | orchestrator |
2025-10-08 16:04:53.228205 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-10-08 16:04:53.228217 | orchestrator |
2025-10-08 16:04:53.228228 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-10-08 16:04:53.228705 | orchestrator | Wednesday 08 October 2025 16:02:26 +0000 (0:00:00.368) 0:00:00.907 *****
2025-10-08 16:04:53.228735 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:04:53.228756 | orchestrator |
2025-10-08 16:04:53.228776 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-10-08 16:04:53.228790 | orchestrator | Wednesday 08 October 2025 16:02:27 +0000 (0:00:00.468) 0:00:01.376 *****
2025-10-08 16:04:53.228822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-10-08 16:04:53.228839 | orchestrator | changed: [testbed-node-0] => (item=grafana) [... same item dict as testbed-node-1 above ...]
2025-10-08 16:04:53.228851 | orchestrator | changed: [testbed-node-2] => (item=grafana) [... same item dict as testbed-node-1 above ...]
2025-10-08 16:04:53.228863 | orchestrator |
2025-10-08 16:04:53.228873 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-10-08 16:04:53.228884 | orchestrator | Wednesday 08 October 2025 16:02:27 +0000 (0:00:00.715) 0:00:02.091 *****
2025-10-08 16:04:53.228896 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access issue: '/operations/prometheus/grafana' is not a directory
2025-10-08 16:04:53.228919 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 16:04:53.228943 | orchestrator |
2025-10-08 16:04:53.228954 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-10-08 16:04:53.228965 | orchestrator | Wednesday 08 October 2025 16:02:28 +0000 (0:00:00.606) 0:00:02.698 *****
2025-10-08 16:04:53.228976 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:04:53.228988 | orchestrator |
2025-10-08 16:04:53.228998 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-10-08 16:04:53.229009 | orchestrator | Wednesday 08 October 2025 16:02:29 +0000 (0:00:00.610) 0:00:03.309 *****
2025-10-08 16:04:53.229034 | orchestrator | changed: [testbed-node-2] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229047 | orchestrator | changed: [testbed-node-1] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229474 | orchestrator | changed: [testbed-node-0] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229493 | orchestrator |
2025-10-08 16:04:53.229505 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-10-08 16:04:53.229516 | orchestrator | Wednesday 08 October 2025 16:02:30 +0000 (0:00:01.490) 0:00:04.799 *****
2025-10-08 16:04:53.229539 | orchestrator | skipping: [testbed-node-0] [... item dict as above ...]
2025-10-08 16:04:53.229575 | orchestrator | skipping: [testbed-node-1] [... item dict as above ...]
2025-10-08 16:04:53.229640 | orchestrator | skipping: [testbed-node-2] [... item dict as above ...]
2025-10-08 16:04:53.229651 | orchestrator |
2025-10-08 16:04:53.229662 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-10-08 16:04:53.229673 | orchestrator | Wednesday 08 October 2025 16:02:30 +0000 (0:00:00.338) 0:00:05.138 *****
2025-10-08 16:04:53.229713 | orchestrator | skipping: [testbed-node-0] [... item dict as above ...]
2025-10-08 16:04:53.229725 | orchestrator | skipping: [testbed-node-1] [... item dict as above ...]
2025-10-08 16:04:53.229747 | orchestrator | skipping: [testbed-node-2] [... item dict as above ...]
2025-10-08 16:04:53.229758 | orchestrator |
2025-10-08 16:04:53.229769 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-10-08 16:04:53.229780 | orchestrator | Wednesday 08 October 2025 16:02:31 +0000 (0:00:00.668) 0:00:05.806 *****
2025-10-08 16:04:53.229791 | orchestrator | changed: [testbed-node-0] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229810 | orchestrator | changed: [testbed-node-1] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229853 | orchestrator | changed: [testbed-node-2] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229867 | orchestrator |
2025-10-08 16:04:53.229877 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-10-08 16:04:53.229888 | orchestrator | Wednesday 08 October 2025 16:02:32 +0000 (0:00:01.288) 0:00:07.095 *****
2025-10-08 16:04:53.229899 | orchestrator | changed: [testbed-node-0] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229916 | orchestrator | changed: [testbed-node-1] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229928 | orchestrator | changed: [testbed-node-2] => (item=grafana) [... same item dict as above ...]
2025-10-08 16:04:53.229946 | orchestrator |
2025-10-08 16:04:53.229958 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-10-08 16:04:53.229969 | orchestrator | Wednesday 08 October 2025 16:02:34 +0000 (0:00:01.335) 0:00:08.430 *****
2025-10-08 16:04:53.229979 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:04:53.229990 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:04:53.230001 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:04:53.230012 | orchestrator |
2025-10-08 16:04:53.230076
| orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-10-08 16:04:53.230090 | orchestrator | Wednesday 08 October 2025 16:02:34 +0000 (0:00:00.402) 0:00:08.833 *****
2025-10-08 16:04:53.230102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-10-08 16:04:53.230115 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-10-08 16:04:53.230127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-10-08 16:04:53.230139 | orchestrator |
2025-10-08 16:04:53.230151 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-10-08 16:04:53.230223 | orchestrator | Wednesday 08 October 2025 16:02:35 +0000 (0:00:01.206) 0:00:10.040 *****
2025-10-08 16:04:53.230235 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-10-08 16:04:53.230248 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-10-08 16:04:53.230260 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-10-08 16:04:53.230272 | orchestrator |
2025-10-08 16:04:53.230284 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-10-08 16:04:53.230298 | orchestrator | Wednesday 08 October 2025 16:02:37 +0000 (0:00:01.234) 0:00:11.274 *****
2025-10-08 16:04:53.230346 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-08 16:04:53.230362 | orchestrator |
2025-10-08 16:04:53.230374 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-10-08 16:04:53.230386 | orchestrator | Wednesday 08 October 2025 16:02:37 +0000 (0:00:00.782) 0:00:12.057 *****
2025-10-08 16:04:53.230398 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-10-08 16:04:53.230423 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:04:53.230437 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:04:53.230448 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:04:53.230459 | orchestrator |
2025-10-08 16:04:53.230470 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-10-08 16:04:53.230481 | orchestrator | Wednesday 08 October 2025 16:02:38 +0000 (0:00:00.750) 0:00:12.807 *****
2025-10-08 16:04:53.230492 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:04:53.230502 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:04:53.230513 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:04:53.230524 | orchestrator |
2025-10-08 16:04:53.230535 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-10-08 16:04:53.230546 | orchestrator | Wednesday 08 October 2025 16:02:39 +0000 (0:00:00.510) 0:00:13.318 *****
2025-10-08 16:04:53.230564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092777, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7760253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.230585 | orchestrator | [... matching 'changed' results with analogous stat dicts for testbed-node-1 and testbed-node-2 (ceph/ceph-cluster-advanced.json), and for all three nodes for ceph/rbd-overview.json, ceph/ceph_pools.json and ceph/rgw-s3-analytics.json ...]
2025-10-08 16:04:53.230798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092820, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.784601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp':
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092820, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.784601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092820, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.784601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092839, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7882233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092839, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7882233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092839, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7882233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.230991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092775, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7735007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092775, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7735007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092775, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7735007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092792, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.776889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092792, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.776889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092792, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.776889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092803, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7808774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092803, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7808774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092803, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7808774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092827, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092827, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092827, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092846, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.789437, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092846, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.789437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092846, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.789437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092793, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 
1759936278.7788482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092793, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7788482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092793, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7788482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092832, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 
'mtime': 1759881754.0, 'ctime': 1759936278.787496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092832, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.787496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092832, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.787496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092823, 'dev': 132, 'nlink': 1, 
'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092823, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092823, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.785217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 
'inode': 1092815, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7838795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092815, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7838795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092815, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7838795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092812, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092812, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092812, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092831, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7859445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092831, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7859445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092831, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7859445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.231944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092809, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.231961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092809, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.231978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092809, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7818248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.231995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092844, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7888505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092844, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7888505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092844, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7888505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092984, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8249533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092984, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8249533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092984, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8249533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092911, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8075273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092911, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8075273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092911, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8075273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092893, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.801099, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092893, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.801099, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092893, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.801099, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092930, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8094294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092930, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8094294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092930, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8094294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092869, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7950916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092869, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7950916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092869, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7950916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092954, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.817951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092954, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.817951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092954, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.817951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092931, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092931, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092931, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092957, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8181946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092957, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8181946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092957, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8181946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092979, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8241367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092979, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8241367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092979, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8241367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092951, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8171206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092951, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8171206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092951, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8171206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092927, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092927, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092927, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092905, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8043265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092905, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8043265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092905, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8043265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092924, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092924, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092924, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8081858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092897, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8028483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092897, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8028483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092897, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8028483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092929, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8089757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092929, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8089757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092929, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8089757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092967, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8228486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092967, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8228486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092967, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8228486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092962, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8203926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092962, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8203926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092962, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8203926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092872, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7961223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092872, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7961223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092872, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.7961223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-08 16:04:53.232956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092874, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8007236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
False, 'isgid': False}}) 2025-10-08 16:04:53.232974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092874, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8007236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.232992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092874, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8007236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092947, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092947, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092947, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8158486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092960, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 
1759881754.0, 'ctime': 1759936278.8185084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092960, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8185084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092960, 'dev': 132, 'nlink': 1, 'atime': 1759881754.0, 'mtime': 1759881754.0, 'ctime': 1759936278.8185084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-08 16:04:53.233136 | orchestrator | 2025-10-08 16:04:53.233150 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-10-08 16:04:53.233188 | orchestrator | Wednesday 08 October 2025 16:03:18 +0000 (0:00:39.561) 0:00:52.880 ***** 2025-10-08 16:04:53.233199 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-10-08 16:04:53.233215 | orchestrator | changed: [testbed-node-1] => (item=grafana, configuration identical to testbed-node-0)
2025-10-08 16:04:53.233225 | orchestrator | changed: [testbed-node-2] => (item=grafana, configuration identical to testbed-node-0)
2025-10-08 16:04:53.233235 | orchestrator |
2025-10-08 16:04:53.233245 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-10-08 16:04:53.233255 | orchestrator | Wednesday 08 October 2025 16:03:19 +0000 (0:00:01.013) 0:00:53.894 *****
2025-10-08 16:04:53.233265 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:04:53.233275 | orchestrator |
2025-10-08 16:04:53.233284 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-10-08 16:04:53.233294 | orchestrator | Wednesday 08 October 2025 16:03:22 +0000 (0:00:02.330) 0:00:56.225 *****
2025-10-08 16:04:53.233304 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:04:53.233313 | orchestrator |
2025-10-08 16:04:53.233323 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-08 16:04:53.233332 | orchestrator | Wednesday 08 October 2025 16:03:24 +0000 (0:00:02.344) 0:00:58.569 *****
2025-10-08 16:04:53.233342 | orchestrator |
2025-10-08 16:04:53.233352 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-08 16:04:53.233367 | orchestrator | Wednesday 08 October 2025 16:03:24 +0000 (0:00:00.078) 0:00:58.648 *****
2025-10-08 16:04:53.233377 | orchestrator |
2025-10-08 16:04:53.233386 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-08 16:04:53.233396 | orchestrator | Wednesday 08 October 2025 16:03:24 +0000 (0:00:00.060) 0:00:58.708 *****
2025-10-08 16:04:53.233406 | orchestrator |
2025-10-08 16:04:53.233415 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-10-08 16:04:53.233425 | orchestrator | Wednesday 08 October 2025 16:03:24 +0000 (0:00:00.233) 0:00:58.941 *****
2025-10-08 16:04:53.233434 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:04:53.233444 | orchestrator | 
skipping: [testbed-node-2]
2025-10-08 16:04:53.233454 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:04:53.233463 | orchestrator |
2025-10-08 16:04:53.233473 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-10-08 16:04:53.233483 | orchestrator | Wednesday 08 October 2025 16:03:31 +0000 (0:00:06.925) 0:01:05.867 *****
2025-10-08 16:04:53.233492 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:04:53.233502 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:04:53.233512 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-10-08 16:04:53.233522 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-10-08 16:04:53.233538 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-10-08 16:04:53.233547 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:04:53.233557 | orchestrator |
2025-10-08 16:04:53.233567 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-10-08 16:04:53.233576 | orchestrator | Wednesday 08 October 2025 16:04:10 +0000 (0:00:38.845) 0:01:44.713 *****
2025-10-08 16:04:53.233586 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:04:53.233596 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:04:53.233606 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:04:53.233615 | orchestrator |
2025-10-08 16:04:53.233629 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-10-08 16:04:53.233639 | orchestrator | Wednesday 08 October 2025 16:04:45 +0000 (0:00:34.984) 0:02:19.698 *****
2025-10-08 16:04:53.233649 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:04:53.233659 | orchestrator |
2025-10-08 16:04:53.233669 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-10-08 16:04:53.233678 | orchestrator | Wednesday 08 October 2025 16:04:47 +0000 (0:00:02.300) 0:02:21.998 *****
2025-10-08 16:04:53.233687 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:04:53.233697 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:04:53.233707 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:04:53.233716 | orchestrator |
2025-10-08 16:04:53.233726 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-10-08 16:04:53.233736 | orchestrator | Wednesday 08 October 2025 16:04:48 +0000 (0:00:00.519) 0:02:22.517 *****
2025-10-08 16:04:53.233747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-10-08 16:04:53.233759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-10-08 16:04:53.233770 | orchestrator |
2025-10-08 16:04:53.233779 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-10-08 16:04:53.233789 | orchestrator | Wednesday 08 October 2025 16:04:50 +0000 (0:00:02.385) 0:02:24.903 *****
2025-10-08 16:04:53.233798 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:04:53.233808 | orchestrator |
2025-10-08 16:04:53.233817 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:04:53.233828 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2025-10-08 16:04:53.233838 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2025-10-08 16:04:53.233848 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2025-10-08 16:04:53.233858 | orchestrator |
2025-10-08 16:04:53.233868 | orchestrator |
2025-10-08 16:04:53.233878 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:04:53.233888 | orchestrator | Wednesday 08 October 2025 16:04:51 +0000 (0:00:00.305) 0:02:25.208 *****
2025-10-08 16:04:53.233897 | orchestrator | ===============================================================================
2025-10-08 16:04:53.233907 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.56s
2025-10-08 16:04:53.233917 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.85s
2025-10-08 16:04:53.233933 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.98s
2025-10-08 16:04:53.233942 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.93s
2025-10-08 16:04:53.233952 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s
2025-10-08 16:04:53.233966 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2025-10-08 16:04:53.233976 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s
2025-10-08 16:04:53.233986 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s
2025-10-08 16:04:53.233995 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.49s
2025-10-08 16:04:53.234005 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s
2025-10-08 16:04:53.234041 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s
2025-10-08 16:04:53.234053 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.23s
2025-10-08 16:04:53.234063 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.21s
2025-10-08 16:04:53.234072 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.01s
2025-10-08 16:04:53.234082 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.78s
2025-10-08 16:04:53.234091 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s
2025-10-08 16:04:53.234101 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.72s
2025-10-08 16:04:53.234110 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.67s
2025-10-08 16:04:53.234120 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.61s
2025-10-08 16:04:53.234129 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.61s
2025-10-08 16:04:53.234139 | orchestrator | 2025-10-08 16:04:53 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:04:53.234149 | orchestrator | 2025-10-08 16:04:53 | INFO  | Wait 1 second(s) until the next check
2025-10-08 16:04:56.273918 | orchestrator | 2025-10-08 16:04:56 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state STARTED
2025-10-08 16:04:56.276351 | orchestrator | 2025-10-08 16:04:56 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:04:56.277411 | orchestrator | 2025-10-08 16:04:56 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/wait polling for both tasks repeated every ~3 s from 16:04:59 to 16:05:23]
2025-10-08 16:05:26.773146 | orchestrator | 2025-10-08 16:05:26 | INFO  | Task c9acb2ec-d69c-41f2-b09a-cfb2bdbcdae2 is in state SUCCESS
2025-10-08 16:05:26.775483 | orchestrator | 2025-10-08 16:05:26 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED
2025-10-08 16:05:26.775517 | orchestrator | 2025-10-08 16:05:26 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/wait polling for task 7b93d838-8799-4349-ac0d-2e6e54b93fcc repeated every ~3 s from 16:05:29 to 16:07:56]
2025-10-08 16:07:59.121401 | orchestrator | 2025-10-08 16:07:59 | INFO  | Task 
7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:07:59.121506 | orchestrator | 2025-10-08 16:07:59 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:02.164500 | orchestrator | 2025-10-08 16:08:02 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:02.165552 | orchestrator | 2025-10-08 16:08:02 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:05.217760 | orchestrator | 2025-10-08 16:08:05 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:05.217862 | orchestrator | 2025-10-08 16:08:05 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:08.273640 | orchestrator | 2025-10-08 16:08:08 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:08.273739 | orchestrator | 2025-10-08 16:08:08 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:11.327329 | orchestrator | 2025-10-08 16:08:11 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:11.327429 | orchestrator | 2025-10-08 16:08:11 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:14.383747 | orchestrator | 2025-10-08 16:08:14 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:14.383849 | orchestrator | 2025-10-08 16:08:14 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:17.433515 | orchestrator | 2025-10-08 16:08:17 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:17.433616 | orchestrator | 2025-10-08 16:08:17 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:20.478221 | orchestrator | 2025-10-08 16:08:20 | INFO  | Task 7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state STARTED 2025-10-08 16:08:20.478331 | orchestrator | 2025-10-08 16:08:20 | INFO  | Wait 1 second(s) until the next check 2025-10-08 16:08:23.527792 | orchestrator | 2025-10-08 16:08:23 | INFO  | Task 
7b93d838-8799-4349-ac0d-2e6e54b93fcc is in state SUCCESS
2025-10-08 16:08:23.530757 | orchestrator |
2025-10-08 16:08:23.530851 | orchestrator |
2025-10-08 16:08:23.530865 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-10-08 16:08:23.530878 | orchestrator |
2025-10-08 16:08:23.530888 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-10-08 16:08:23.530901 | orchestrator | Wednesday 08 October 2025 15:59:08 +0000 (0:00:00.229) 0:00:00.229 *****
2025-10-08 16:08:23.531014 | orchestrator | changed: [localhost]
2025-10-08 16:08:23.531031 | orchestrator |
2025-10-08 16:08:23.531042 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-10-08 16:08:23.531052 | orchestrator | Wednesday 08 October 2025 15:59:10 +0000 (0:00:01.433) 0:00:01.663 *****
2025-10-08 16:08:23.531062 | orchestrator |
2025-10-08 16:08:23.531072 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[... the same STILL ALIVE keepalive message repeated while the download ran ...]
2025-10-08 16:08:23.531245 | orchestrator | changed: [localhost]
2025-10-08 16:08:23.531741 | orchestrator |
2025-10-08 16:08:23.531762 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-10-08 16:08:23.531772 | orchestrator | Wednesday 08 October 2025 16:05:12 +0000 (0:06:01.931) 0:06:03.594 *****
2025-10-08 16:08:23.531782 | orchestrator | changed: [localhost]
2025-10-08 16:08:23.531792 | orchestrator |
2025-10-08 16:08:23.531802 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:08:23.531812 | orchestrator |
2025-10-08 16:08:23.531822 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:08:23.531832 | orchestrator | Wednesday 08 October 2025 16:05:25 +0000 (0:00:13.344) 0:06:16.939 *****
2025-10-08 16:08:23.531842 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.531852 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.531900 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.531914 | orchestrator |
2025-10-08 16:08:23.531924 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:08:23.531935 | orchestrator | Wednesday 08 October 2025 16:05:25 +0000 (0:00:00.287) 0:06:17.227 *****
2025-10-08 16:08:23.531945 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-10-08 16:08:23.531956 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-10-08 16:08:23.531967 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-10-08 16:08:23.531983 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-10-08 16:08:23.531994 | orchestrator |
2025-10-08 16:08:23.532004 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-10-08 16:08:23.532015 | orchestrator | skipping: no hosts matched
2025-10-08 16:08:23.532026 | orchestrator |
2025-10-08 16:08:23.532328 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:08:23.532345 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:08:23.532358 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:08:23.532369 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:08:23.532379 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-08 16:08:23.532389 | orchestrator |
2025-10-08 16:08:23.532399 | orchestrator |
2025-10-08 16:08:23.532409 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:08:23.532419 | orchestrator | Wednesday 08 October 2025 16:05:26 +0000 (0:00:00.594) 0:06:17.822 *****
2025-10-08 16:08:23.532429 | orchestrator | ===============================================================================
2025-10-08 16:08:23.532439 | orchestrator | Download ironic-agent initramfs --------------------------------------- 361.93s
2025-10-08 16:08:23.532449 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.34s
2025-10-08 16:08:23.532459 | orchestrator | Ensure the destination directory exists --------------------------------- 1.43s
2025-10-08 16:08:23.532469 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-10-08 16:08:23.532479 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-10-08 16:08:23.532489 | orchestrator |
2025-10-08 16:08:23.532499 | orchestrator |
2025-10-08 16:08:23.532509 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:08:23.532519 | orchestrator |
2025-10-08 16:08:23.532528 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:08:23.532538 | orchestrator | Wednesday 08 October 2025 16:03:35 +0000 (0:00:00.263) 0:00:00.263 *****
2025-10-08 16:08:23.532548 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.532558 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.532569 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.532590 | orchestrator |
2025-10-08 16:08:23.532599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:08:23.532610 | orchestrator | Wednesday 08 October 2025 16:03:35 +0000 (0:00:00.322) 0:00:00.586 *****
2025-10-08 16:08:23.532620 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-10-08 16:08:23.532670 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-10-08 16:08:23.532682 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-10-08 16:08:23.532692 | orchestrator |
2025-10-08 16:08:23.532702 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-10-08 16:08:23.532712 | orchestrator |
2025-10-08 16:08:23.532722 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-08 16:08:23.532732 | orchestrator | Wednesday 08 October 2025 16:03:35 +0000 (0:00:00.474) 0:00:01.060 *****
2025-10-08 16:08:23.532742 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:08:23.532752 | orchestrator |
2025-10-08 16:08:23.532761 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-10-08 16:08:23.532771 | orchestrator | Wednesday 08 October 2025 16:03:36 +0000 (0:00:00.610) 0:00:01.671 *****
2025-10-08 16:08:23.532781 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-10-08 16:08:23.532791 | orchestrator |
2025-10-08 16:08:23.532800 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-10-08 16:08:23.532810 | orchestrator | Wednesday 08 October 2025 16:03:40 +0000 (0:00:03.695) 0:00:05.367 *****
2025-10-08 16:08:23.532857 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-10-08 16:08:23.532869 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-10-08 16:08:23.532880 | orchestrator |
2025-10-08 16:08:23.532890 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-10-08 16:08:23.532901 | orchestrator | Wednesday 08 October 2025 16:03:47 +0000 (0:00:06.861) 0:00:12.229 *****
2025-10-08 16:08:23.532911 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-08 16:08:23.532921 | orchestrator |
2025-10-08 16:08:23.532931 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-10-08 16:08:23.532942 | orchestrator | Wednesday 08 October 2025 16:03:50 +0000 (0:00:03.459) 0:00:15.689 *****
2025-10-08 16:08:23.532952 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-08 16:08:23.532965 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-10-08 16:08:23.532976 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-10-08 16:08:23.532987 | orchestrator |
2025-10-08 16:08:23.532998 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-10-08 16:08:23.533010 | orchestrator | Wednesday 08 October 2025 16:03:59 +0000 (0:00:08.449) 0:00:24.138 *****
2025-10-08 16:08:23.533023 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-08 16:08:23.533034 | orchestrator |
2025-10-08 16:08:23.533046 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-10-08 16:08:23.533057 | orchestrator | Wednesday 08 October 2025 16:04:02 +0000 (0:00:03.560) 0:00:27.699 *****
2025-10-08 16:08:23.533069 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-10-08 16:08:23.533081 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-10-08 16:08:23.533092 | orchestrator |
2025-10-08 16:08:23.533104 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-10-08 16:08:23.533115 | orchestrator | Wednesday 08 October 2025 16:04:10 +0000 (0:00:07.601) 0:00:35.300 *****
2025-10-08 16:08:23.533126 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-10-08 16:08:23.533138 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-10-08 16:08:23.533149 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-10-08 16:08:23.533168 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-10-08 16:08:23.533236 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-10-08 16:08:23.533250 | orchestrator |
2025-10-08 16:08:23.533262 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-08 16:08:23.533274 | orchestrator | Wednesday 08 October 2025 16:04:26 +0000 (0:00:16.147) 0:00:51.448 *****
2025-10-08 16:08:23.533286 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:08:23.533297 | orchestrator |
2025-10-08 16:08:23.533309 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-10-08 16:08:23.533319 | orchestrator | Wednesday 08 October 2025 16:04:27 +0000 (0:00:00.650) 0:00:52.099 *****
2025-10-08 16:08:23.533329 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533339 | orchestrator |
2025-10-08 16:08:23.533349 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-10-08 16:08:23.533359 | orchestrator | Wednesday 08 October 2025 16:04:32 +0000 (0:00:05.598) 0:00:57.697 *****
2025-10-08 16:08:23.533369 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533379 | orchestrator |
2025-10-08 16:08:23.533389 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-10-08 16:08:23.533399 | orchestrator | Wednesday 08 October 2025 16:04:36 +0000 (0:00:04.183) 0:01:01.881 *****
2025-10-08 16:08:23.533409 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.533419 | orchestrator |
2025-10-08 16:08:23.533429 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-10-08 16:08:23.533439 | orchestrator | Wednesday 08 October 2025 16:04:40 +0000 (0:00:03.230) 0:01:05.112 *****
2025-10-08 16:08:23.533449 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-10-08 16:08:23.533459 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-10-08 16:08:23.533469 | orchestrator |
2025-10-08 16:08:23.533479 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-10-08 16:08:23.533489 | orchestrator | Wednesday 08 October 2025 16:04:51 +0000 (0:00:11.031) 0:01:16.143 *****
2025-10-08 16:08:23.533533 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-10-08 16:08:23.533546 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-10-08 16:08:23.533562 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-10-08 16:08:23.533573 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-10-08 16:08:23.533583 | orchestrator |
2025-10-08 16:08:23.533593 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-10-08 16:08:23.533603 | orchestrator | Wednesday 08 October 2025 16:05:06 +0000 (0:00:15.891) 0:01:32.035 *****
2025-10-08 16:08:23.533613 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533623 | orchestrator |
2025-10-08 16:08:23.533633 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-10-08 16:08:23.533643 | orchestrator | Wednesday 08 October 2025 16:05:11 +0000 (0:00:04.461) 0:01:36.496 *****
2025-10-08 16:08:23.533653 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533663 | orchestrator |
2025-10-08 16:08:23.533674 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-10-08 16:08:23.533684 | orchestrator | Wednesday 08 October 2025 16:05:16 +0000 (0:00:05.420) 0:01:41.916 *****
2025-10-08 16:08:23.533695 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:08:23.533705 | orchestrator |
2025-10-08 16:08:23.533715 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-10-08 16:08:23.533726 | orchestrator | Wednesday 08 October 2025 16:05:17 +0000 (0:00:00.243) 0:01:42.159 *****
2025-10-08 16:08:23.533743 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533754 | orchestrator |
2025-10-08 16:08:23.533764 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-08 16:08:23.533774 | orchestrator | Wednesday 08 October 2025 16:05:21 +0000 (0:00:04.728) 0:01:46.888 *****
2025-10-08 16:08:23.533784 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:08:23.533794 | orchestrator |
2025-10-08 16:08:23.533804 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-10-08 16:08:23.533815 | orchestrator | Wednesday 08 October 2025 16:05:22 +0000 (0:00:01.022) 0:01:47.910 *****
2025-10-08 16:08:23.533825 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.533835 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.533846 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533856 | orchestrator |
2025-10-08 16:08:23.533866 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-10-08 16:08:23.533876 | orchestrator | Wednesday 08 October 2025 16:05:28 +0000 (0:00:05.691) 0:01:53.602 *****
2025-10-08 16:08:23.533887 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.533897 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533907 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.533917 | orchestrator |
2025-10-08 16:08:23.533927 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-10-08 16:08:23.533938 | orchestrator | Wednesday 08 October 2025 16:05:33 +0000 (0:00:05.268) 0:01:58.871 *****
2025-10-08 16:08:23.533948 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.533958 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.533969 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.533979 | orchestrator |
2025-10-08 16:08:23.533989 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-10-08 16:08:23.533999 | orchestrator | Wednesday 08 October 2025 16:05:34 +0000 (0:00:00.852) 0:01:59.724 *****
2025-10-08 16:08:23.534009 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534082 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.534093 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.534103 | orchestrator |
2025-10-08 16:08:23.534112 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-10-08 16:08:23.534123 | orchestrator | Wednesday 08 October 2025 16:05:36 +0000 (0:00:02.013) 0:02:01.738 *****
2025-10-08 16:08:23.534132 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.534142 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.534152 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.534162 | orchestrator |
2025-10-08 16:08:23.534172 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-10-08 16:08:23.534230 | orchestrator | Wednesday 08 October 2025 16:05:37 +0000 (0:00:01.288) 0:02:03.026 *****
2025-10-08 16:08:23.534242 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.534252 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.534262 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.534271 | orchestrator |
2025-10-08 16:08:23.534281 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-10-08 16:08:23.534291 | orchestrator | Wednesday 08 October 2025 16:05:39 +0000 (0:00:01.238) 0:02:04.265 *****
2025-10-08 16:08:23.534301 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.534311 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.534321 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.534331 | orchestrator |
2025-10-08 16:08:23.534341 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-10-08 16:08:23.534349 | orchestrator | Wednesday 08 October 2025 16:05:41 +0000 (0:00:02.102) 0:02:06.367 *****
2025-10-08 16:08:23.534358 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.534366 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.534374 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.534382 | orchestrator |
2025-10-08 16:08:23.534396 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-10-08 16:08:23.534405 | orchestrator | Wednesday 08 October 2025 16:05:43 +0000 (0:00:01.776) 0:02:08.144 *****
2025-10-08 16:08:23.534413 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534421 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.534429 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.534437 | orchestrator |
2025-10-08 16:08:23.534474 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-10-08 16:08:23.534484 | orchestrator | Wednesday 08 October 2025 16:05:43 +0000 (0:00:00.631) 0:02:08.776 *****
2025-10-08 16:08:23.534492 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534500 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.534508 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.534516 | orchestrator |
2025-10-08 16:08:23.534529 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-08 16:08:23.534537 | orchestrator | Wednesday 08 October 2025 16:05:48 +0000 (0:00:04.708) 0:02:13.484 *****
2025-10-08 16:08:23.534545 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:08:23.534553 | orchestrator |
2025-10-08 16:08:23.534562 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-10-08 16:08:23.534570 | orchestrator | Wednesday 08 October 2025 16:05:49 +0000 (0:00:00.898) 0:02:14.383 *****
2025-10-08 16:08:23.534578 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534585 | orchestrator |
2025-10-08 16:08:23.534594 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-10-08 16:08:23.534602 | orchestrator | Wednesday 08 October 2025 16:05:52 +0000 (0:00:03.521) 0:02:17.904 *****
2025-10-08 16:08:23.534610 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534618 | orchestrator |
2025-10-08 16:08:23.534626 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-10-08 16:08:23.534634 | orchestrator | Wednesday 08 October 2025 16:05:56 +0000 (0:00:03.274) 0:02:21.178 *****
2025-10-08 16:08:23.534642 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-10-08 16:08:23.534650 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-10-08 16:08:23.534659 | orchestrator |
2025-10-08 16:08:23.534667 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-10-08 16:08:23.534675 | orchestrator | Wednesday 08 October 2025 16:06:02 +0000 (0:00:06.823) 0:02:28.002 *****
2025-10-08 16:08:23.534683 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534691 | orchestrator |
2025-10-08 16:08:23.534699 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-10-08 16:08:23.534707 | orchestrator | Wednesday 08 October 2025 16:06:06 +0000 (0:00:03.465) 0:02:31.467 *****
2025-10-08 16:08:23.534715 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:08:23.534723 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:08:23.534731 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:08:23.534739 | orchestrator |
2025-10-08 16:08:23.534747 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-10-08 16:08:23.534756 | orchestrator | Wednesday 08 October 2025 16:06:06 +0000 (0:00:00.306) 0:02:31.773 *****
2025-10-08 16:08:23.534767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-08 16:08:23.534785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-08 16:08:23.534824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-08 16:08:23.534836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-08 16:08:23.534845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-08 16:08:23.534854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-08 16:08:23.534863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-08 16:08:23.534878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-08 16:08:23.534886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-08 16:08:23.534922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-08 16:08:23.534933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-08 16:08:23.534941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.534950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.534959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.534974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.534982 | orchestrator | 2025-10-08 16:08:23.534991 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-10-08 16:08:23.534999 | orchestrator | Wednesday 08 October 2025 16:06:09 +0000 (0:00:02.415) 0:02:34.189 ***** 2025-10-08 16:08:23.535007 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.535015 | orchestrator | 2025-10-08 16:08:23.535023 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-10-08 16:08:23.535031 | orchestrator | Wednesday 08 October 2025 16:06:09 +0000 (0:00:00.118) 0:02:34.308 ***** 2025-10-08 16:08:23.535039 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.535047 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:08:23.535055 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:08:23.535063 | orchestrator | 2025-10-08 16:08:23.535071 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-10-08 16:08:23.535079 | orchestrator | Wednesday 08 October 2025 16:06:09 +0000 (0:00:00.463) 0:02:34.772 ***** 2025-10-08 16:08:23.535117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535169 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.535177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535269 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:08:23.535278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 
16:08:23.535320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535351 | orchestrator | skipping: [testbed-node-2] 2025-10-08 
16:08:23.535360 | orchestrator | 2025-10-08 16:08:23.535368 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-08 16:08:23.535376 | orchestrator | Wednesday 08 October 2025 16:06:10 +0000 (0:00:00.607) 0:02:35.379 ***** 2025-10-08 16:08:23.535384 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-08 16:08:23.535393 | orchestrator | 2025-10-08 16:08:23.535401 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-10-08 16:08:23.535409 | orchestrator | Wednesday 08 October 2025 16:06:10 +0000 (0:00:00.483) 0:02:35.863 ***** 2025-10-08 16:08:23.535417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.535426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.535462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.535472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.535486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.535495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.535503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535512 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.535593 | orchestrator | 2025-10-08 16:08:23.535601 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-10-08 16:08:23.535610 | orchestrator | Wednesday 08 October 2025 16:06:15 +0000 (0:00:05.109) 0:02:40.972 ***** 2025-10-08 16:08:23.535630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535680 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.535689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535746 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:08:23.535755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535813 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:08:23.535821 | orchestrator | 2025-10-08 16:08:23.535829 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-10-08 
16:08:23.535837 | orchestrator | Wednesday 08 October 2025 16:06:16 +0000 (0:00:00.847) 0:02:41.819 ***** 2025-10-08 16:08:23.535845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535900 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.535909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.535955 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:08:23.535971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-08 16:08:23.535980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-08 16:08:23.535989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.535997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-08 16:08:23.536006 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-08 16:08:23.536014 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:08:23.536022 | orchestrator | 2025-10-08 16:08:23.536030 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-10-08 16:08:23.536038 | orchestrator | Wednesday 08 October 2025 16:06:17 +0000 (0:00:00.838) 0:02:42.658 ***** 2025-10-08 16:08:23.536051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536069 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 
16:08:23.536205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536227 | orchestrator | 2025-10-08 16:08:23.536235 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-10-08 16:08:23.536243 | orchestrator | Wednesday 08 October 2025 16:06:22 +0000 (0:00:05.066) 0:02:47.725 ***** 2025-10-08 16:08:23.536251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-08 16:08:23.536259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-08 16:08:23.536267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-08 16:08:23.536275 | orchestrator | 2025-10-08 16:08:23.536283 | orchestrator | TASK [octavia 
: Copying over octavia.conf] ************************************* 2025-10-08 16:08:23.536291 | orchestrator | Wednesday 08 October 2025 16:06:24 +0000 (0:00:01.961) 0:02:49.686 ***** 2025-10-08 16:08:23.536300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2025-10-08 16:08:23.536385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536419 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536459 | orchestrator | 2025-10-08 16:08:23.536467 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-10-08 16:08:23.536475 | orchestrator | Wednesday 08 October 2025 16:06:39 +0000 (0:00:15.248) 0:03:04.935 ***** 2025-10-08 16:08:23.536484 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:08:23.536492 | orchestrator | changed: [testbed-node-1] 2025-10-08 16:08:23.536500 | orchestrator | changed: [testbed-node-2] 2025-10-08 16:08:23.536508 | orchestrator | 2025-10-08 16:08:23.536516 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-10-08 16:08:23.536524 | orchestrator | Wednesday 08 October 2025 16:06:41 +0000 (0:00:01.458) 0:03:06.393 ***** 2025-10-08 16:08:23.536532 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536540 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536547 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536555 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536563 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536571 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536579 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536588 | orchestrator | changed: 
[testbed-node-1] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536595 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536604 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536611 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536623 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536631 | orchestrator | 2025-10-08 16:08:23.536639 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-10-08 16:08:23.536648 | orchestrator | Wednesday 08 October 2025 16:06:46 +0000 (0:00:05.402) 0:03:11.796 ***** 2025-10-08 16:08:23.536659 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536667 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536676 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536683 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536691 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536700 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536708 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536716 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536724 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536732 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536740 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536747 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536756 | orchestrator | 2025-10-08 16:08:23.536763 | 
orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-10-08 16:08:23.536772 | orchestrator | Wednesday 08 October 2025 16:06:52 +0000 (0:00:05.839) 0:03:17.635 ***** 2025-10-08 16:08:23.536784 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536792 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536800 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-08 16:08:23.536808 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536816 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536824 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-08 16:08:23.536832 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536840 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536848 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-08 16:08:23.536856 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536864 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536872 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-08 16:08:23.536880 | orchestrator | 2025-10-08 16:08:23.536888 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-10-08 16:08:23.536896 | orchestrator | Wednesday 08 October 2025 16:06:57 +0000 (0:00:05.390) 0:03:23.026 ***** 2025-10-08 16:08:23.536905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-08 16:08:23.536946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-08 16:08:23.536971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.536991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-08 16:08:23.537059 | orchestrator | 2025-10-08 16:08:23.537067 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-08 16:08:23.537075 | orchestrator | Wednesday 08 October 2025 16:07:01 +0000 (0:00:03.841) 0:03:26.867 ***** 2025-10-08 16:08:23.537083 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:08:23.537091 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:08:23.537099 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:08:23.537107 | orchestrator | 2025-10-08 16:08:23.537115 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-10-08 16:08:23.537127 | orchestrator | Wednesday 08 October 2025 16:07:02 +0000 (0:00:00.332) 0:03:27.199 ***** 2025-10-08 16:08:23.537140 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:08:23.537148 | orchestrator | 2025-10-08 16:08:23.537156 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-10-08 16:08:23.537164 | orchestrator | Wednesday 08 October 2025 16:07:04 +0000 (0:00:02.114) 0:03:29.314 ***** 2025-10-08 16:08:23.537176 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:08:23.537226 | orchestrator | 2025-10-08 16:08:23.537235 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-10-08 16:08:23.537244 | orchestrator | Wednesday 08 October 2025 16:07:06 +0000 (0:00:02.191) 0:03:31.505 ***** 2025-10-08 16:08:23.537253 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:08:23.537262 | orchestrator | 2025-10-08 16:08:23.537270 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-10-08 16:08:23.537278 | orchestrator | Wednesday 08 October 2025 16:07:08 +0000 (0:00:02.315) 0:03:33.820 ***** 2025-10-08 16:08:23.537287 | orchestrator | changed: [testbed-node-0] 
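The long `changed: [...] => (item={'key': ..., 'value': ...})` entries in the loops above are the shape Ansible reports when a task iterates a mapping through the `dict2items` filter, which is how kolla-ansible typically loops over its per-service configuration dict. A minimal sketch of that transformation, with service names and fields abbreviated from the log (the dict contents here are illustrative, not the full config):

```python
# Sketch: how Ansible's dict2items filter turns a services mapping into the
# (item={'key': ..., 'value': ...}) entries seen in the loop output above.
# Services and fields are abbreviated from the log for illustration.

services = {
    "octavia-api": {
        "container_name": "octavia_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/octavia-api:2024.2",
    },
    "octavia-worker": {
        "container_name": "octavia_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
    },
}

def dict2items(mapping):
    """Mimic Ansible's dict2items filter: one {'key', 'value'} dict per entry."""
    return [{"key": k, "value": v} for k, v in mapping.items()]

for item in dict2items(services):
    # Ansible reports each iteration as (item={'key': ..., 'value': ...})
    print(item["key"], "->", item["value"]["container_name"])
```

This is why every loop result above carries the service name under `key` and the container definition (volumes, healthcheck, haproxy settings) under `value`.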
2025-10-08 16:08:23.537295 | orchestrator |
2025-10-08 16:08:23.537304 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-10-08 16:08:23.537312 | orchestrator | Wednesday 08 October 2025 16:07:11 +0000 (0:00:02.512) 0:03:36.332 *****
2025-10-08 16:08:23.537319 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537326 | orchestrator |
2025-10-08 16:08:23.537333 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-10-08 16:08:23.537340 | orchestrator | Wednesday 08 October 2025 16:07:32 +0000 (0:00:21.646) 0:03:57.979 *****
2025-10-08 16:08:23.537348 | orchestrator |
2025-10-08 16:08:23.537355 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-10-08 16:08:23.537362 | orchestrator | Wednesday 08 October 2025 16:07:32 +0000 (0:00:00.065) 0:03:58.045 *****
2025-10-08 16:08:23.537369 | orchestrator |
2025-10-08 16:08:23.537376 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-10-08 16:08:23.537383 | orchestrator | Wednesday 08 October 2025 16:07:33 +0000 (0:00:00.063) 0:03:58.108 *****
2025-10-08 16:08:23.537390 | orchestrator |
2025-10-08 16:08:23.537397 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-10-08 16:08:23.537404 | orchestrator | Wednesday 08 October 2025 16:07:33 +0000 (0:00:00.065) 0:03:58.174 *****
2025-10-08 16:08:23.537412 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537419 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.537426 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.537433 | orchestrator |
2025-10-08 16:08:23.537440 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-10-08 16:08:23.537448 | orchestrator | Wednesday 08 October 2025 16:07:49 +0000 (0:00:16.303) 0:04:14.478 *****
2025-10-08 16:08:23.537455 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537462 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.537469 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.537476 | orchestrator |
2025-10-08 16:08:23.537484 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-10-08 16:08:23.537491 | orchestrator | Wednesday 08 October 2025 16:07:56 +0000 (0:00:07.060) 0:04:21.538 *****
2025-10-08 16:08:23.537498 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537505 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.537512 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.537520 | orchestrator |
2025-10-08 16:08:23.537527 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-10-08 16:08:23.537534 | orchestrator | Wednesday 08 October 2025 16:08:01 +0000 (0:00:05.552) 0:04:27.091 *****
2025-10-08 16:08:23.537541 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537548 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.537555 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.537562 | orchestrator |
2025-10-08 16:08:23.537569 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-10-08 16:08:23.537577 | orchestrator | Wednesday 08 October 2025 16:08:12 +0000 (0:00:10.404) 0:04:37.496 *****
2025-10-08 16:08:23.537589 | orchestrator | changed: [testbed-node-2]
2025-10-08 16:08:23.537596 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:08:23.537603 | orchestrator | changed: [testbed-node-1]
2025-10-08 16:08:23.537610 | orchestrator |
2025-10-08 16:08:23.537617 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:08:23.537625 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-08 16:08:23.537632 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:08:23.537640 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-08 16:08:23.537647 | orchestrator |
2025-10-08 16:08:23.537654 | orchestrator |
2025-10-08 16:08:23.537661 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:08:23.537668 | orchestrator | Wednesday 08 October 2025 16:08:22 +0000 (0:00:10.447) 0:04:47.943 *****
2025-10-08 16:08:23.537675 | orchestrator | ===============================================================================
2025-10-08 16:08:23.537683 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.65s
2025-10-08 16:08:23.537690 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.30s
2025-10-08 16:08:23.537697 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.15s
2025-10-08 16:08:23.537704 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.89s
2025-10-08 16:08:23.537711 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.25s
2025-10-08 16:08:23.537718 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.03s
2025-10-08 16:08:23.537729 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.45s
2025-10-08 16:08:23.537737 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.40s
2025-10-08 16:08:23.537744 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.45s
2025-10-08 16:08:23.537754 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.60s
2025-10-08 16:08:23.537762 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.06s
2025-10-08 16:08:23.537769 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.86s
2025-10-08 16:08:23.537776 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.82s
2025-10-08 16:08:23.537783 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.84s
2025-10-08 16:08:23.537790 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.69s
2025-10-08 16:08:23.537797 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.60s
2025-10-08 16:08:23.537804 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.55s
2025-10-08 16:08:23.537812 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.42s
2025-10-08 16:08:23.537819 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.40s
2025-10-08 16:08:23.537826 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.39s
2025-10-08 16:08:23.537833 | orchestrator | 2025-10-08 16:08:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:26.566556 | orchestrator | 2025-10-08 16:08:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:29.603753 | orchestrator | 2025-10-08 16:08:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:32.644614 | orchestrator | 2025-10-08 16:08:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:35.695460 | orchestrator | 2025-10-08 16:08:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:38.741472 | orchestrator | 2025-10-08 16:08:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:41.787663 | orchestrator | 2025-10-08 16:08:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:44.828222 | orchestrator | 2025-10-08 16:08:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:47.870120 | orchestrator | 2025-10-08 16:08:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:50.911532 | orchestrator | 2025-10-08 16:08:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:53.951648 | orchestrator | 2025-10-08 16:08:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:08:56.991929 | orchestrator | 2025-10-08 16:08:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:00.034064 | orchestrator | 2025-10-08 16:09:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:03.082051 | orchestrator | 2025-10-08 16:09:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:06.127291 | orchestrator | 2025-10-08 16:09:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:09.168467 | orchestrator | 2025-10-08 16:09:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:12.218247 | orchestrator | 2025-10-08 16:09:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:15.263067 | orchestrator | 2025-10-08 16:09:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:18.307569 | orchestrator | 2025-10-08 16:09:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:21.347487 | orchestrator | 2025-10-08 16:09:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-10-08 16:09:24.392709 | orchestrator |
2025-10-08 16:09:24.764073 | orchestrator |
2025-10-08 16:09:24.769325 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Oct 8 16:09:24 UTC 2025
2025-10-08 16:09:24.769359 | orchestrator |
2025-10-08 16:09:25.138945 | orchestrator | ok: Runtime: 0:35:39.782619
2025-10-08 16:09:25.411131 |
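The TASKS RECAP above ranks the slowest tasks with their durations in a fixed "name ---- N.NNs" layout. When triaging slow deploys it can help to pull those pairs out programmatically; a small parser sketch, with the line format assumed from the recap lines in this log:

```python
import re

# Sketch: extract (task name, seconds) from TASKS RECAP lines such as
# "octavia : Running Octavia bootstrap container ------------------ 21.65s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return a list of (task, seconds) pairs for lines matching the recap layout."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "octavia : Running Octavia bootstrap container -------------------------- 21.65s",
    "octavia : Restart octavia-api container -------------------------------- 16.30s",
]
print(parse_recap(sample))
```

The lazy `.+?` stops the task name at the first run of dashes, so hyphens inside service names like `octavia-api` are left untouched.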
2025-10-08 16:09:25.411285 | TASK [Bootstrap services]
2025-10-08 16:09:26.165395 | orchestrator |
2025-10-08 16:09:26.165574 | orchestrator | # BOOTSTRAP
2025-10-08 16:09:26.165601 | orchestrator |
2025-10-08 16:09:26.165616 | orchestrator | + set -e
2025-10-08 16:09:26.165630 | orchestrator | + echo
2025-10-08 16:09:26.165644 | orchestrator | + echo '# BOOTSTRAP'
2025-10-08 16:09:26.165662 | orchestrator | + echo
2025-10-08 16:09:26.165709 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-10-08 16:09:26.178486 | orchestrator | + set -e
2025-10-08 16:09:26.178544 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-10-08 16:09:31.011265 | orchestrator | 2025-10-08 16:09:31 | INFO  | It takes a moment until task 6fe85d09-d184-4c43-82b8-7084557feb6c (flavor-manager) has been started and output is visible here.
2025-10-08 16:09:38.522357 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1L-1 created
2025-10-08 16:09:38.522505 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1L-1-5 created
2025-10-08 16:09:38.522525 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1V-2 created
2025-10-08 16:09:38.522539 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1V-2-5 created
2025-10-08 16:09:38.522551 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1V-4 created
2025-10-08 16:09:38.522564 | orchestrator | 2025-10-08 16:09:34 | INFO  | Flavor SCS-1V-4-10 created
2025-10-08 16:09:38.522576 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-1V-8 created
2025-10-08 16:09:38.522589 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-1V-8-20 created
2025-10-08 16:09:38.522612 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-2V-4 created
2025-10-08 16:09:38.522624 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-2V-4-10 created
2025-10-08 16:09:38.522636 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-2V-8 created
2025-10-08 16:09:38.522648 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-2V-8-20 created
2025-10-08 16:09:38.522659 | orchestrator | 2025-10-08 16:09:35 | INFO  | Flavor SCS-2V-16 created
2025-10-08 16:09:38.522671 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-2V-16-50 created
2025-10-08 16:09:38.522682 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-8 created
2025-10-08 16:09:38.522695 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-8-20 created
2025-10-08 16:09:38.522706 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-16 created
2025-10-08 16:09:38.522717 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-16-50 created
2025-10-08 16:09:38.522728 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-32 created
2025-10-08 16:09:38.522740 | orchestrator | 2025-10-08 16:09:36 | INFO  | Flavor SCS-4V-32-100 created
2025-10-08 16:09:38.522751 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-8V-16 created
2025-10-08 16:09:38.522763 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-8V-16-50 created
2025-10-08 16:09:38.522775 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-8V-32 created
2025-10-08 16:09:38.522787 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-8V-32-100 created
2025-10-08 16:09:38.522798 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-16V-32 created
2025-10-08 16:09:38.522810 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-16V-32-100 created
2025-10-08 16:09:38.522838 | orchestrator | 2025-10-08 16:09:37 | INFO  | Flavor SCS-2V-4-20s created
2025-10-08 16:09:38.522850 | orchestrator | 2025-10-08 16:09:38 | INFO  | Flavor SCS-4V-8-50s created
2025-10-08 16:09:38.522861 | orchestrator | 2025-10-08 16:09:38 | INFO  | Flavor SCS-8V-32-100s created
2025-10-08 16:09:40.940115 | orchestrator | 2025-10-08 16:09:40 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-10-08 16:09:51.087963 | orchestrator | 2025-10-08 16:09:51 | INFO  |
Task 86b64513-82e1-49c2-8c93-e88e33bc23ab (bootstrap-basic) was prepared for execution. 2025-10-08 16:09:51.088087 | orchestrator | 2025-10-08 16:09:51 | INFO  | It takes a moment until task 86b64513-82e1-49c2-8c93-e88e33bc23ab (bootstrap-basic) has been started and output is visible here. 2025-10-08 16:10:52.402571 | orchestrator | 2025-10-08 16:10:52.402695 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-10-08 16:10:52.402712 | orchestrator | 2025-10-08 16:10:52.402724 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-08 16:10:52.402737 | orchestrator | Wednesday 08 October 2025 16:09:55 +0000 (0:00:00.083) 0:00:00.083 ***** 2025-10-08 16:10:52.402749 | orchestrator | ok: [localhost] 2025-10-08 16:10:52.402761 | orchestrator | 2025-10-08 16:10:52.402773 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-10-08 16:10:52.402784 | orchestrator | Wednesday 08 October 2025 16:09:57 +0000 (0:00:01.966) 0:00:02.050 ***** 2025-10-08 16:10:52.402795 | orchestrator | ok: [localhost] 2025-10-08 16:10:52.402806 | orchestrator | 2025-10-08 16:10:52.402818 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-10-08 16:10:52.402829 | orchestrator | Wednesday 08 October 2025 16:10:06 +0000 (0:00:08.559) 0:00:10.610 ***** 2025-10-08 16:10:52.402840 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.402852 | orchestrator | 2025-10-08 16:10:52.402863 | orchestrator | TASK [Get volume type local] *************************************************** 2025-10-08 16:10:52.402874 | orchestrator | Wednesday 08 October 2025 16:10:13 +0000 (0:00:07.362) 0:00:17.973 ***** 2025-10-08 16:10:52.402886 | orchestrator | ok: [localhost] 2025-10-08 16:10:52.402897 | orchestrator | 2025-10-08 16:10:52.402908 | orchestrator | TASK [Create volume type local] 
************************************************ 2025-10-08 16:10:52.402919 | orchestrator | Wednesday 08 October 2025 16:10:20 +0000 (0:00:07.180) 0:00:25.154 ***** 2025-10-08 16:10:52.402935 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.402946 | orchestrator | 2025-10-08 16:10:52.402957 | orchestrator | TASK [Create public network] *************************************************** 2025-10-08 16:10:52.402968 | orchestrator | Wednesday 08 October 2025 16:10:28 +0000 (0:00:07.927) 0:00:33.081 ***** 2025-10-08 16:10:52.402979 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.402990 | orchestrator | 2025-10-08 16:10:52.403002 | orchestrator | TASK [Set public network to default] ******************************************* 2025-10-08 16:10:52.403013 | orchestrator | Wednesday 08 October 2025 16:10:33 +0000 (0:00:05.479) 0:00:38.561 ***** 2025-10-08 16:10:52.403024 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.403035 | orchestrator | 2025-10-08 16:10:52.403046 | orchestrator | TASK [Create public subnet] **************************************************** 2025-10-08 16:10:52.403068 | orchestrator | Wednesday 08 October 2025 16:10:40 +0000 (0:00:06.492) 0:00:45.054 ***** 2025-10-08 16:10:52.403080 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.403091 | orchestrator | 2025-10-08 16:10:52.403104 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-10-08 16:10:52.403116 | orchestrator | Wednesday 08 October 2025 16:10:44 +0000 (0:00:04.400) 0:00:49.454 ***** 2025-10-08 16:10:52.403128 | orchestrator | changed: [localhost] 2025-10-08 16:10:52.403141 | orchestrator | 2025-10-08 16:10:52.403153 | orchestrator | TASK [Create manager role] ***************************************************** 2025-10-08 16:10:52.403165 | orchestrator | Wednesday 08 October 2025 16:10:48 +0000 (0:00:03.842) 0:00:53.297 ***** 2025-10-08 16:10:52.403178 | orchestrator | ok: [localhost] 2025-10-08 
16:10:52.403217 | orchestrator | 2025-10-08 16:10:52.403230 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:10:52.403243 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 16:10:52.403255 | orchestrator | 2025-10-08 16:10:52.403267 | orchestrator | 2025-10-08 16:10:52.403280 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:10:52.403319 | orchestrator | Wednesday 08 October 2025 16:10:52 +0000 (0:00:03.409) 0:00:56.707 ***** 2025-10-08 16:10:52.403332 | orchestrator | =============================================================================== 2025-10-08 16:10:52.403344 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.56s 2025-10-08 16:10:52.403356 | orchestrator | Create volume type local ------------------------------------------------ 7.93s 2025-10-08 16:10:52.403369 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.36s 2025-10-08 16:10:52.403381 | orchestrator | Get volume type local --------------------------------------------------- 7.18s 2025-10-08 16:10:52.403393 | orchestrator | Set public network to default ------------------------------------------- 6.49s 2025-10-08 16:10:52.403405 | orchestrator | Create public network --------------------------------------------------- 5.48s 2025-10-08 16:10:52.403418 | orchestrator | Create public subnet ---------------------------------------------------- 4.40s 2025-10-08 16:10:52.403430 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.84s 2025-10-08 16:10:52.403442 | orchestrator | Create manager role ----------------------------------------------------- 3.41s 2025-10-08 16:10:52.403453 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s 2025-10-08 
16:10:54.884027 | orchestrator | 2025-10-08 16:10:54 | INFO  | It takes a moment until task 2a569e59-7db0-44ba-98d1-0ceba539f6d7 (image-manager) has been started and output is visible here. 2025-10-08 16:11:37.403996 | orchestrator | 2025-10-08 16:10:57 | INFO  | Processing image 'Cirros 0.6.2' 2025-10-08 16:11:37.404114 | orchestrator | 2025-10-08 16:10:57 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-10-08 16:11:37.404134 | orchestrator | 2025-10-08 16:10:57 | INFO  | Importing image Cirros 0.6.2 2025-10-08 16:11:37.404146 | orchestrator | 2025-10-08 16:10:57 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-10-08 16:11:37.404193 | orchestrator | 2025-10-08 16:11:00 | INFO  | Waiting for image to leave queued state... 2025-10-08 16:11:37.404207 | orchestrator | 2025-10-08 16:11:02 | INFO  | Waiting for import to complete... 2025-10-08 16:11:37.404219 | orchestrator | 2025-10-08 16:11:12 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-10-08 16:11:37.404230 | orchestrator | 2025-10-08 16:11:12 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-10-08 16:11:37.404241 | orchestrator | 2025-10-08 16:11:12 | INFO  | Setting internal_version = 0.6.2 2025-10-08 16:11:37.404253 | orchestrator | 2025-10-08 16:11:12 | INFO  | Setting image_original_user = cirros 2025-10-08 16:11:37.404264 | orchestrator | 2025-10-08 16:11:12 | INFO  | Adding tag os:cirros 2025-10-08 16:11:37.404276 | orchestrator | 2025-10-08 16:11:13 | INFO  | Setting property architecture: x86_64 2025-10-08 16:11:37.404286 | orchestrator | 2025-10-08 16:11:13 | INFO  | Setting property hw_disk_bus: scsi 2025-10-08 16:11:37.404297 | orchestrator | 2025-10-08 16:11:13 | INFO  | Setting property hw_rng_model: virtio 2025-10-08 16:11:37.404309 | orchestrator | 2025-10-08 16:11:13 | INFO  | Setting property hw_scsi_model: 
virtio-scsi 2025-10-08 16:11:37.404320 | orchestrator | 2025-10-08 16:11:14 | INFO  | Setting property hw_watchdog_action: reset 2025-10-08 16:11:37.404331 | orchestrator | 2025-10-08 16:11:14 | INFO  | Setting property hypervisor_type: qemu 2025-10-08 16:11:37.404342 | orchestrator | 2025-10-08 16:11:14 | INFO  | Setting property os_distro: cirros 2025-10-08 16:11:37.404353 | orchestrator | 2025-10-08 16:11:14 | INFO  | Setting property os_purpose: minimal 2025-10-08 16:11:37.404363 | orchestrator | 2025-10-08 16:11:14 | INFO  | Setting property replace_frequency: never 2025-10-08 16:11:37.404401 | orchestrator | 2025-10-08 16:11:15 | INFO  | Setting property uuid_validity: none 2025-10-08 16:11:37.404413 | orchestrator | 2025-10-08 16:11:15 | INFO  | Setting property provided_until: none 2025-10-08 16:11:37.404432 | orchestrator | 2025-10-08 16:11:15 | INFO  | Setting property image_description: Cirros 2025-10-08 16:11:37.404449 | orchestrator | 2025-10-08 16:11:15 | INFO  | Setting property image_name: Cirros 2025-10-08 16:11:37.404460 | orchestrator | 2025-10-08 16:11:16 | INFO  | Setting property internal_version: 0.6.2 2025-10-08 16:11:37.404471 | orchestrator | 2025-10-08 16:11:16 | INFO  | Setting property image_original_user: cirros 2025-10-08 16:11:37.404482 | orchestrator | 2025-10-08 16:11:16 | INFO  | Setting property os_version: 0.6.2 2025-10-08 16:11:37.404493 | orchestrator | 2025-10-08 16:11:16 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-10-08 16:11:37.404505 | orchestrator | 2025-10-08 16:11:17 | INFO  | Setting property image_build_date: 2023-05-30 2025-10-08 16:11:37.404517 | orchestrator | 2025-10-08 16:11:17 | INFO  | Checking status of 'Cirros 0.6.2' 2025-10-08 16:11:37.404529 | orchestrator | 2025-10-08 16:11:17 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-10-08 16:11:37.404542 | orchestrator | 2025-10-08 16:11:17 | INFO  | Setting visibility 
of 'Cirros 0.6.2' to 'public' 2025-10-08 16:11:37.404554 | orchestrator | 2025-10-08 16:11:17 | INFO  | Processing image 'Cirros 0.6.3' 2025-10-08 16:11:37.404567 | orchestrator | 2025-10-08 16:11:17 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-10-08 16:11:37.404580 | orchestrator | 2025-10-08 16:11:17 | INFO  | Importing image Cirros 0.6.3 2025-10-08 16:11:37.404592 | orchestrator | 2025-10-08 16:11:17 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-10-08 16:11:37.404604 | orchestrator | 2025-10-08 16:11:19 | INFO  | Waiting for image to leave queued state... 2025-10-08 16:11:37.404617 | orchestrator | 2025-10-08 16:11:21 | INFO  | Waiting for import to complete... 2025-10-08 16:11:37.404646 | orchestrator | 2025-10-08 16:11:31 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-10-08 16:11:37.404660 | orchestrator | 2025-10-08 16:11:31 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-10-08 16:11:37.404672 | orchestrator | 2025-10-08 16:11:31 | INFO  | Setting internal_version = 0.6.3 2025-10-08 16:11:37.404684 | orchestrator | 2025-10-08 16:11:31 | INFO  | Setting image_original_user = cirros 2025-10-08 16:11:37.404697 | orchestrator | 2025-10-08 16:11:31 | INFO  | Adding tag os:cirros 2025-10-08 16:11:37.404709 | orchestrator | 2025-10-08 16:11:32 | INFO  | Setting property architecture: x86_64 2025-10-08 16:11:37.404722 | orchestrator | 2025-10-08 16:11:32 | INFO  | Setting property hw_disk_bus: scsi 2025-10-08 16:11:37.404734 | orchestrator | 2025-10-08 16:11:32 | INFO  | Setting property hw_rng_model: virtio 2025-10-08 16:11:37.404746 | orchestrator | 2025-10-08 16:11:32 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-10-08 16:11:37.404759 | orchestrator | 2025-10-08 16:11:33 | INFO  | Setting property hw_watchdog_action: reset 2025-10-08 16:11:37.404771 | 
orchestrator | 2025-10-08 16:11:33 | INFO  | Setting property hypervisor_type: qemu 2025-10-08 16:11:37.404783 | orchestrator | 2025-10-08 16:11:33 | INFO  | Setting property os_distro: cirros 2025-10-08 16:11:37.404803 | orchestrator | 2025-10-08 16:11:33 | INFO  | Setting property os_purpose: minimal 2025-10-08 16:11:37.404816 | orchestrator | 2025-10-08 16:11:33 | INFO  | Setting property replace_frequency: never 2025-10-08 16:11:37.404829 | orchestrator | 2025-10-08 16:11:34 | INFO  | Setting property uuid_validity: none 2025-10-08 16:11:37.404841 | orchestrator | 2025-10-08 16:11:34 | INFO  | Setting property provided_until: none 2025-10-08 16:11:37.404853 | orchestrator | 2025-10-08 16:11:34 | INFO  | Setting property image_description: Cirros 2025-10-08 16:11:37.404865 | orchestrator | 2025-10-08 16:11:34 | INFO  | Setting property image_name: Cirros 2025-10-08 16:11:37.404876 | orchestrator | 2025-10-08 16:11:35 | INFO  | Setting property internal_version: 0.6.3 2025-10-08 16:11:37.404887 | orchestrator | 2025-10-08 16:11:35 | INFO  | Setting property image_original_user: cirros 2025-10-08 16:11:37.404898 | orchestrator | 2025-10-08 16:11:35 | INFO  | Setting property os_version: 0.6.3 2025-10-08 16:11:37.404909 | orchestrator | 2025-10-08 16:11:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-10-08 16:11:37.404920 | orchestrator | 2025-10-08 16:11:36 | INFO  | Setting property image_build_date: 2024-09-26 2025-10-08 16:11:37.404936 | orchestrator | 2025-10-08 16:11:36 | INFO  | Checking status of 'Cirros 0.6.3' 2025-10-08 16:11:37.404947 | orchestrator | 2025-10-08 16:11:36 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-10-08 16:11:37.404958 | orchestrator | 2025-10-08 16:11:36 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-10-08 16:11:37.718889 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 
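Editor's note: the SCS flavor names created by the flavor-manager above encode their resources in the name (nV = vCPUs, then RAM in GiB, then an optional root disk in GB; a trailing "s" marks a flavor variant with local SSD). A minimal sketch, assuming that naming scheme, of how such a name could map to an `openstack flavor create` call; the parsing below is illustrative and not taken from the flavor-manager itself:

```shell
# Hypothetical sketch: decode an SCS flavor name like SCS-2V-4-10
# into vCPUs / RAM / disk and print the corresponding CLI call.
name="SCS-2V-4-10"
spec="${name#SCS-}"     # strip the SCS- prefix -> 2V-4-10
vcpus="${spec%%V-*}"    # digits before "V-"     -> 2
rest="${spec#*V-}"      # RAM and disk part      -> 4-10
ram_gib="${rest%%-*}"   # RAM in GiB             -> 4
disk_gb="${rest#*-}"    # root disk in GB        -> 10
# RAM is given to Nova in MiB, hence the *1024 conversion.
echo "openstack flavor create --vcpus $vcpus --ram $((ram_gib * 1024)) --disk $disk_gb $name"
# prints: openstack flavor create --vcpus 2 --ram 4096 --disk 10 SCS-2V-4-10
```

The command is only echoed here, since running it would require a reachable OpenStack API and credentials.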
2025-10-08 16:11:39.887568 | orchestrator | 2025-10-08 16:11:39 | INFO  | date: 2025-10-08
2025-10-08 16:11:39.887666 | orchestrator | 2025-10-08 16:11:39 | INFO  | image: octavia-amphora-haproxy-2024.2.20251008.qcow2
2025-10-08 16:11:39.887890 | orchestrator | 2025-10-08 16:11:39 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251008.qcow2
2025-10-08 16:11:39.887936 | orchestrator | 2025-10-08 16:11:39 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251008.qcow2.CHECKSUM
2025-10-08 16:11:39.928812 | orchestrator | 2025-10-08 16:11:39 | INFO  | checksum: adb321befa65788c534b9230450a301179ddd808afbfbbfbf0ac38ad111b76d2
2025-10-08 16:11:39.994940 | orchestrator | 2025-10-08 16:11:39 | INFO  | It takes a moment until task cbd5ec06-417f-41fb-aa80-b871b30def99 (image-manager) has been started and output is visible here.
2025-10-08 16:12:41.617115 | orchestrator | 2025-10-08 16:11:42 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-10-08'
2025-10-08 16:12:41.617226 | orchestrator | 2025-10-08 16:11:42 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251008.qcow2: 200
2025-10-08 16:12:41.617246 | orchestrator | 2025-10-08 16:11:42 | INFO  | Importing image OpenStack Octavia Amphora 2025-10-08
2025-10-08 16:12:41.617257 | orchestrator | 2025-10-08 16:11:42 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251008.qcow2
2025-10-08 16:12:41.617269 | orchestrator | 2025-10-08 16:11:43 | INFO  | Waiting for image to leave queued state...
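Editor's note: the amphora image fetched above is validated against a published sha256 checksum (the `checksum_url` / `checksum` lines). A minimal, self-contained sketch of that verification step; the file and expected hash here are illustrative (a locally created file standing in for the real qcow2 download):

```shell
# Hypothetical sketch: verify a downloaded file against an expected sha256.
tmp=$(mktemp)
printf 'hello\n' > "$tmp"   # stands in for the downloaded image
# Well-known sha256 of the string "hello\n":
expected="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
actual=$(sha256sum "$tmp" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
rm -f "$tmp"
# prints: checksum OK
```

In the real script the expected value would come from fetching the `.CHECKSUM` file rather than being hard-coded.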
2025-10-08 16:12:41.617280 | orchestrator | 2025-10-08 16:11:45 | INFO  | Waiting for import to complete...
2025-10-08 16:12:41.617358 | orchestrator | 2025-10-08 16:11:55 | INFO  | Waiting for import to complete...
2025-10-08 16:12:41.617369 | orchestrator | 2025-10-08 16:12:05 | INFO  | Waiting for import to complete...
2025-10-08 16:12:41.617379 | orchestrator | 2025-10-08 16:12:16 | INFO  | Waiting for import to complete...
2025-10-08 16:12:41.617388 | orchestrator | 2025-10-08 16:12:26 | INFO  | Waiting for import to complete...
2025-10-08 16:12:41.617398 | orchestrator | 2025-10-08 16:12:36 | INFO  | Import of 'OpenStack Octavia Amphora 2025-10-08' successfully completed, reloading images
2025-10-08 16:12:41.617410 | orchestrator | 2025-10-08 16:12:36 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-10-08'
2025-10-08 16:12:41.617420 | orchestrator | 2025-10-08 16:12:36 | INFO  | Setting internal_version = 2025-10-08
2025-10-08 16:12:41.617429 | orchestrator | 2025-10-08 16:12:36 | INFO  | Setting image_original_user = ubuntu
2025-10-08 16:12:41.617440 | orchestrator | 2025-10-08 16:12:36 | INFO  | Adding tag amphora
2025-10-08 16:12:41.617450 | orchestrator | 2025-10-08 16:12:36 | INFO  | Adding tag os:ubuntu
2025-10-08 16:12:41.617460 | orchestrator | 2025-10-08 16:12:37 | INFO  | Setting property architecture: x86_64
2025-10-08 16:12:41.617469 | orchestrator | 2025-10-08 16:12:37 | INFO  | Setting property hw_disk_bus: scsi
2025-10-08 16:12:41.617479 | orchestrator | 2025-10-08 16:12:37 | INFO  | Setting property hw_rng_model: virtio
2025-10-08 16:12:41.617489 | orchestrator | 2025-10-08 16:12:37 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-10-08 16:12:41.617513 | orchestrator | 2025-10-08 16:12:38 | INFO  | Setting property hw_watchdog_action: reset
2025-10-08 16:12:41.617523 | orchestrator | 2025-10-08 16:12:38 | INFO  | Setting property hypervisor_type: qemu
2025-10-08 16:12:41.617533 | orchestrator | 2025-10-08 16:12:38 | INFO  | Setting property os_distro: ubuntu
2025-10-08 16:12:41.617543 | orchestrator | 2025-10-08 16:12:38 | INFO  | Setting property replace_frequency: quarterly
2025-10-08 16:12:41.617552 | orchestrator | 2025-10-08 16:12:39 | INFO  | Setting property uuid_validity: last-1
2025-10-08 16:12:41.617562 | orchestrator | 2025-10-08 16:12:39 | INFO  | Setting property provided_until: none
2025-10-08 16:12:41.617572 | orchestrator | 2025-10-08 16:12:39 | INFO  | Setting property os_purpose: network
2025-10-08 16:12:41.617582 | orchestrator | 2025-10-08 16:12:39 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-10-08 16:12:41.617591 | orchestrator | 2025-10-08 16:12:39 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-10-08 16:12:41.617601 | orchestrator | 2025-10-08 16:12:40 | INFO  | Setting property internal_version: 2025-10-08
2025-10-08 16:12:41.617611 | orchestrator | 2025-10-08 16:12:40 | INFO  | Setting property image_original_user: ubuntu
2025-10-08 16:12:41.617621 | orchestrator | 2025-10-08 16:12:40 | INFO  | Setting property os_version: 2025-10-08
2025-10-08 16:12:41.617631 | orchestrator | 2025-10-08 16:12:40 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251008.qcow2
2025-10-08 16:12:41.617641 | orchestrator | 2025-10-08 16:12:41 | INFO  | Setting property image_build_date: 2025-10-08
2025-10-08 16:12:41.617651 | orchestrator | 2025-10-08 16:12:41 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-10-08'
2025-10-08 16:12:41.617661 | orchestrator | 2025-10-08 16:12:41 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-10-08'
2025-10-08 16:12:41.617696 | orchestrator | 2025-10-08 16:12:41 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-10-08 16:12:41.617707 | orchestrator | 2025-10-08 16:12:41 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-10-08 16:12:41.617717 | orchestrator | 2025-10-08 16:12:41 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-10-08 16:12:41.617728 | orchestrator | 2025-10-08 16:12:41 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-10-08 16:12:42.073143 | orchestrator | ok: Runtime: 0:03:16.177746
2025-10-08 16:12:42.089493 |
2025-10-08 16:12:42.089612 | TASK [Run checks]
2025-10-08 16:12:42.765237 | orchestrator | + set -e
2025-10-08 16:12:42.765418 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-08 16:12:42.765442 | orchestrator | ++ export INTERACTIVE=false
2025-10-08 16:12:42.765464 | orchestrator | ++ INTERACTIVE=false
2025-10-08 16:12:42.765478 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-08 16:12:42.765490 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-08 16:12:42.765505 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-10-08 16:12:42.766680 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-10-08 16:12:42.773822 | orchestrator |
2025-10-08 16:12:42.773894 | orchestrator | # CHECK
2025-10-08 16:12:42.773907 | orchestrator |
2025-10-08 16:12:42.773919 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 16:12:42.773936 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 16:12:42.773947 | orchestrator | + echo
2025-10-08 16:12:42.773958 | orchestrator | + echo '# CHECK'
2025-10-08 16:12:42.773969 | orchestrator | + echo
2025-10-08 16:12:42.773982 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-08 16:12:42.774996 | orchestrator | ++ semver latest 5.0.0
2025-10-08 16:12:42.839439 | orchestrator |
2025-10-08 16:12:42.839501 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-08 16:12:42.839516 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-08 16:12:42.839528 | orchestrator | + echo
2025-10-08 16:12:42.839540 | orchestrator | + echo '## Containers @ testbed-manager'
2025-10-08 16:12:42.839552 | orchestrator | ## Containers @ testbed-manager
2025-10-08 16:12:42.839563 | orchestrator |
2025-10-08 16:12:42.839574 | orchestrator | + echo
2025-10-08 16:12:42.839584 | orchestrator | + osism container testbed-manager ps
2025-10-08 16:12:45.298809 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-10-08 16:12:45.298897 | orchestrator | 896a80745b53 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_blackbox_exporter
2025-10-08 16:12:45.298917 | orchestrator | fc9cebc887eb registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager
2025-10-08 16:12:45.298927 | orchestrator | 97cf3b6fc94c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2025-10-08 16:12:45.298943 | orchestrator | 111d90b81bec registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2025-10-08 16:12:45.298953 | orchestrator | e036c30c7fc2 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server
2025-10-08 16:12:45.298968 | orchestrator | 900e91c88d89 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes cephclient
2025-10-08 16:12:45.298978 | orchestrator | f5dd217e9f5e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-10-08 16:12:45.298989 | orchestrator | 836f8f77835d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-10-08 16:12:45.298998 | orchestrator | d4005799f22f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-10-08 16:12:45.299029 | orchestrator | a802c2c64205 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2025-10-08 16:12:45.299040 | orchestrator | 705ff9a8778e registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2025-10-08 16:12:45.299050 | orchestrator | 0cfd182d0ec0 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-10-08 16:12:45.299082 | orchestrator | b10d52837864 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-10-08 16:12:45.299093 | orchestrator | a51ad3e9cdd7 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-10-08 16:12:45.299104 | orchestrator | 3681760afdac registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2025-10-08 16:12:45.299128 | orchestrator | 4f36927e17b4 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2025-10-08 16:12:45.299144 | orchestrator | 259ccf641c4a registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2025-10-08 16:12:45.299155 | orchestrator | 8dd085b6ca9b registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2025-10-08 16:12:45.299165 | orchestrator | 262dd6029282 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2025-10-08 16:12:45.299175 | orchestrator | b80fdcecbad4 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-10-08 16:12:45.299184 | orchestrator | a8fa71f91b37 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1
2025-10-08 16:12:45.299194 | orchestrator | 293c89c0e6cb registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1
2025-10-08 16:12:45.299204 | orchestrator | 4ba1e37cd2ad registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-10-08 16:12:45.299222 | orchestrator | f5244b52f797 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1
2025-10-08 16:12:45.299232 | orchestrator | e03f84d2b1eb registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-10-08 16:12:45.299242 | orchestrator | 8a2e8e1be1e6 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient
2025-10-08 16:12:45.299252 | orchestrator | 085e1fe9a0b2 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-10-08 16:12:45.299261 | orchestrator | 575ae449cf2a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1
2025-10-08 16:12:45.299271 | orchestrator | 88839e52a005 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-10-08 16:12:45.624005 | orchestrator |
2025-10-08 16:12:45.624105 | orchestrator | ## Images @ testbed-manager
2025-10-08 16:12:45.624120 | orchestrator |
2025-10-08 16:12:45.624132 | orchestrator | + echo
2025-10-08 16:12:45.624144 | orchestrator | + echo '## Images @ testbed-manager'
2025-10-08 16:12:45.624156 | orchestrator | + echo
2025-10-08 16:12:45.624167 | orchestrator | + osism container testbed-manager images
2025-10-08 16:12:47.958438 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-10-08 16:12:47.958507 | orchestrator | phpmyadmin/phpmyadmin 5.2 1d86f8b711e1 11 hours ago 572MB
2025-10-08 16:12:47.958517 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 db977c5cb951 13 hours ago 244MB
2025-10-08 16:12:47.958537 | orchestrator | registry.osism.tech/osism/cephclient reef 563e84b98854 13 hours ago 453MB
2025-10-08 16:12:47.958546 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a902ffe2bb38 15 hours ago 674MB
2025-10-08 16:12:47.958553 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1b35da557854 15 hours ago 272MB
2025-10-08 16:12:47.958560 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 e9dd1d6a248e 15 hours ago 585MB
2025-10-08 16:12:47.958568 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ba652f35626 15 hours ago 312MB
2025-10-08 16:12:47.958575 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 0134a936a789 15 hours ago 845MB
2025-10-08 16:12:47.958582 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 8364a32056dd 15 hours ago 410MB
2025-10-08 16:12:47.958590 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 7491fede1e51 15 hours ago 314MB
2025-10-08 16:12:47.958597 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 e6667577b663 15 hours ago 364MB
2025-10-08 16:12:47.958604 | orchestrator | registry.osism.tech/osism/osism-ansible latest f7ae347ebe2f 16 hours ago 596MB
2025-10-08 16:12:47.958612 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 1975f32b123c 16 hours ago 592MB
2025-10-08 16:12:47.958619 | orchestrator | registry.osism.tech/osism/ceph-ansible reef f3bd0a1d429d 16 hours ago 545MB
2025-10-08 16:12:47.958640 | orchestrator | registry.osism.tech/osism/osism latest f825cc2c46cd 16 hours ago 327MB
2025-10-08 16:12:47.958648 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 84aa41aebc56 16 hours ago 1.23GB
2025-10-08 16:12:47.958655 | orchestrator | registry.osism.tech/osism/osism-frontend latest 799a68a88f58 16 hours ago 238MB
2025-10-08 16:12:47.958663 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 9daa9768dd5f 16 hours ago 322MB
2025-10-08 16:12:47.958670 | orchestrator | registry.osism.tech/osism/homer v25.08.1 849a6c620511 11 days ago 11.5MB
2025-10-08 16:12:47.958677 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 weeks ago 275MB
2025-10-08 16:12:47.958685 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 885f31622e75 2 months ago 336MB
2025-10-08 16:12:47.958692 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 2 months ago 226MB
2025-10-08 16:12:47.958699 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 3 months ago 41.4MB
2025-10-08 16:12:47.958707 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 16 months ago 146MB
2025-10-08 16:12:48.276749 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-08 16:12:48.277211 | orchestrator | ++ semver latest 5.0.0
2025-10-08 16:12:48.339233 | orchestrator |
2025-10-08 16:12:48.339268 | orchestrator | ## Containers @ testbed-node-0
2025-10-08 16:12:48.339275 | orchestrator |
2025-10-08 16:12:48.339281 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-08 16:12:48.339286 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-08 16:12:48.339292 | orchestrator | + echo
2025-10-08 16:12:48.339298 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-10-08 16:12:48.339304 | orchestrator | + echo
2025-10-08 16:12:48.339309 | orchestrator | + osism container testbed-node-0 ps
2025-10-08 16:12:50.832457 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-10-08 16:12:50.832600 | orchestrator | 50845057d6ea registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-10-08 16:12:50.832622 | orchestrator | 14d80b4bec8f registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-10-08 16:12:50.832639 | orchestrator | 45641e326706 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-10-08 16:12:50.832655 | orchestrator | bc8f3136ccea registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-10-08 16:12:50.832671 | orchestrator | 31c111eea38f registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-10-08 16:12:50.832705 | orchestrator | d8d829d0eb1c registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-10-08 16:12:50.832723 | orchestrator | 11a59a7de162 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2025-10-08 16:12:50.832740 | orchestrator | d9af881a8fcb registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2025-10-08 16:12:50.832750 | orchestrator | b472091908aa registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api
2025-10-08 16:12:50.832776 | orchestrator | 1d149f4ee6a1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2025-10-08 16:12:50.832787 | orchestrator | b326f9ffbdb8 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2025-10-08 16:12:50.832796 | orchestrator | c7d3e56ea57d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_novncproxy
2025-10-08 16:12:50.832811 | orchestrator | b2b75031b1ca registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_producer
2025-10-08 16:12:50.832827 | orchestrator | 63f62b417df5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_conductor
2025-10-08 16:12:50.832844 | orchestrator | 9246b18eb14b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_central
2025-10-08 16:12:50.832859 | orchestrator | af1642283c14 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api
2025-10-08 16:12:50.832876 | orchestrator | 155425064da2 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server
2025-10-08 16:12:50.832892 | orchestrator | 49c65d2b246d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9
2025-10-08 16:12:50.832908 | orchestrator | ebda71d66d80 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api
2025-10-08 16:12:50.832919 | orchestrator | fceeb92a3267 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker
2025-10-08 16:12:50.832928 | orchestrator | f04ba478dd8a
registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_keystone_listener 2025-10-08 16:12:50.832955 | orchestrator | f49c623c6e20 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api 2025-10-08 16:12:50.832965 | orchestrator | a15a3099917a registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-10-08 16:12:50.832975 | orchestrator | 0de9073e78a0 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2025-10-08 16:12:50.832991 | orchestrator | 90588a60fc7c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2025-10-08 16:12:50.833001 | orchestrator | 5f914bf12174 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-10-08 16:12:50.833015 | orchestrator | 997c6a3458ca registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-10-08 16:12:50.833025 | orchestrator | 1fb77073881f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-10-08 16:12:50.833035 | orchestrator | 2f42684e0812 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-10-08 16:12:50.833077 | orchestrator | f684f776d408 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-10-08 16:12:50.833087 | orchestrator | 68f27996f06a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes 
prometheus_node_exporter 2025-10-08 16:12:50.833097 | orchestrator | 17d24878fd21 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0 2025-10-08 16:12:50.833106 | orchestrator | d141983d17a8 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-10-08 16:12:50.833116 | orchestrator | 86a4d249d6d6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-10-08 16:12:50.833126 | orchestrator | 31fbb8fb56ee registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-10-08 16:12:50.833136 | orchestrator | d7e9348d9efd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-10-08 16:12:50.833145 | orchestrator | a43fdf8c6f45 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-10-08 16:12:50.833155 | orchestrator | 7904e075aee3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2025-10-08 16:12:50.833165 | orchestrator | 1773405aec3c registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-10-08 16:12:50.833175 | orchestrator | 3eef196dbc95 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-10-08 16:12:50.833184 | orchestrator | 65cf8f5593b5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0 2025-10-08 16:12:50.833194 | orchestrator | 78fe076004a8 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-10-08 16:12:50.833204 | orchestrator 
| 72fdeaa6911b registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-10-08 16:12:50.833214 | orchestrator | 82ae9324bb46 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-10-08 16:12:50.833235 | orchestrator | 167c681c3dee registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-10-08 16:12:50.833245 | orchestrator | a6e04d87f6c6 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2025-10-08 16:12:50.833255 | orchestrator | fae21743a66f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2025-10-08 16:12:50.833265 | orchestrator | 5239537a066b registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-10-08 16:12:50.833285 | orchestrator | a40b81213ac7 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2025-10-08 16:12:50.833295 | orchestrator | d1c80d8c4a4a registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-10-08 16:12:50.833305 | orchestrator | ef9e686714dc registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-10-08 16:12:50.833314 | orchestrator | fc885435f1c6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-10-08 16:12:50.833324 | orchestrator | b06488b0aa72 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-10-08 16:12:50.833334 | orchestrator | b06bef840a74 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 
31 minutes ago Up 31 minutes (healthy) memcached 2025-10-08 16:12:50.833343 | orchestrator | e3cdbb2cf854 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-10-08 16:12:50.833353 | orchestrator | 7d5c7d384478 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-10-08 16:12:50.833363 | orchestrator | 073066697abb registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-08 16:12:51.173800 | orchestrator | 2025-10-08 16:12:51.173870 | orchestrator | ## Images @ testbed-node-0 2025-10-08 16:12:51.173883 | orchestrator | 2025-10-08 16:12:51.173895 | orchestrator | + echo 2025-10-08 16:12:51.173907 | orchestrator | + echo '## Images @ testbed-node-0' 2025-10-08 16:12:51.173918 | orchestrator | + echo 2025-10-08 16:12:51.173929 | orchestrator | + osism container testbed-node-0 images 2025-10-08 16:12:53.597702 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-08 16:12:53.597826 | orchestrator | registry.osism.tech/osism/ceph-daemon reef affe01d974d8 13 hours ago 1.27GB 2025-10-08 16:12:53.597849 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 e2128038df5a 15 hours ago 280MB 2025-10-08 16:12:53.597867 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a902ffe2bb38 15 hours ago 674MB 2025-10-08 16:12:53.597884 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1b35da557854 15 hours ago 272MB 2025-10-08 16:12:53.597900 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 bf0601e1d9dd 15 hours ago 329MB 2025-10-08 16:12:53.597917 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 e9dd1d6a248e 15 hours ago 585MB 2025-10-08 16:12:53.597933 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 097234f03734 15 hours ago 1.54GB 2025-10-08 16:12:53.597949 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9c92a5db068f 15 hours ago 1.51GB 
2025-10-08 16:12:53.597986 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 91d2c74858eb 15 hours ago 1.01GB 2025-10-08 16:12:53.598006 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 03b0ef34c08b 15 hours ago 372MB 2025-10-08 16:12:53.598118 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 1862aba89a40 15 hours ago 273MB 2025-10-08 16:12:53.598137 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 9eb1407fd9c4 15 hours ago 283MB 2025-10-08 16:12:53.598154 | orchestrator | registry.osism.tech/kolla/redis 2024.2 f442e4ccd930 15 hours ago 279MB 2025-10-08 16:12:53.598170 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 dfc3d5ad7a0b 15 hours ago 279MB 2025-10-08 16:12:53.598214 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef490390cfa1 15 hours ago 1.15GB 2025-10-08 16:12:53.598232 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 90f11a1dc3cb 15 hours ago 288MB 2025-10-08 16:12:53.598249 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d1023156cdd7 15 hours ago 288MB 2025-10-08 16:12:53.598265 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ba652f35626 15 hours ago 312MB 2025-10-08 16:12:53.598282 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 aac14f5199bf 15 hours ago 298MB 2025-10-08 16:12:53.598299 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 0e37577c38c0 15 hours ago 307MB 2025-10-08 16:12:53.598315 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 e6667577b663 15 hours ago 364MB 2025-10-08 16:12:53.598332 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 33549d339512 15 hours ago 305MB 2025-10-08 16:12:53.598349 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 0c177a3a31d3 15 hours ago 454MB 2025-10-08 16:12:53.598365 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 
14c8cbf1ce0b 15 hours ago 1.17GB 2025-10-08 16:12:53.598381 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 29d50291c3a1 15 hours ago 1.09GB 2025-10-08 16:12:53.598398 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 94fb9811d29d 15 hours ago 1.05GB 2025-10-08 16:12:53.598415 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 214291e459e9 15 hours ago 1.04GB 2025-10-08 16:12:53.598431 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 af1af10562c5 15 hours ago 980MB 2025-10-08 16:12:53.598448 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 b7c391a93bce 15 hours ago 981MB 2025-10-08 16:12:53.598464 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 08104990af80 15 hours ago 981MB 2025-10-08 16:12:53.598479 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 0ff7f0a80f25 15 hours ago 981MB 2025-10-08 16:12:53.598494 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1c995604b137 15 hours ago 1.04GB 2025-10-08 16:12:53.598511 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f8e96b956121 15 hours ago 1.06GB 2025-10-08 16:12:53.598527 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 3f45d2d684ee 15 hours ago 1.04GB 2025-10-08 16:12:53.598543 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c4abba62f9b1 15 hours ago 1.04GB 2025-10-08 16:12:53.598559 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b3ab5aaead30 15 hours ago 1.06GB 2025-10-08 16:12:53.598597 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 771727e63a3d 15 hours ago 1.21GB 2025-10-08 16:12:53.598616 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 30e75f195140 15 hours ago 1.21GB 2025-10-08 16:12:53.598632 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 8a06c53514f5 15 hours ago 1.21GB 2025-10-08 16:12:53.598648 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 
c69e7d6545e0 15 hours ago 1.37GB 2025-10-08 16:12:53.598663 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 cc4b238b320d 15 hours ago 1.41GB 2025-10-08 16:12:53.598679 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 9e17733534c9 15 hours ago 1.41GB 2025-10-08 16:12:53.598696 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 2e69ad3bc2e1 15 hours ago 983MB 2025-10-08 16:12:53.598725 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 eb71d12aec7e 15 hours ago 982MB 2025-10-08 16:12:53.598741 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0a090cc4008f 15 hours ago 982MB 2025-10-08 16:12:53.598758 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1f7624c10781 15 hours ago 991MB 2025-10-08 16:12:53.598775 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 db5771ee6474 15 hours ago 996MB 2025-10-08 16:12:53.601556 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 ad73d07baf43 15 hours ago 991MB 2025-10-08 16:12:53.601606 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f48de6e91b44 15 hours ago 990MB 2025-10-08 16:12:53.601614 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d72a3b57da3 15 hours ago 996MB 2025-10-08 16:12:53.601621 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 be797e6d257f 15 hours ago 991MB 2025-10-08 16:12:53.601627 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 f530078c76c5 15 hours ago 1.25GB 2025-10-08 16:12:53.601633 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 3ce47eb7e908 15 hours ago 1.13GB 2025-10-08 16:12:53.601639 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2a03fb3698ee 15 hours ago 998MB 2025-10-08 16:12:53.601646 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50233cbe9a85 15 hours ago 997MB 2025-10-08 16:12:53.601652 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener          2024.2   c25ce9c67829   15 hours ago   998MB
registry.osism.tech/kolla/skyline-console                     2024.2   5bc85d7961c4   15 hours ago   1.05GB
registry.osism.tech/kolla/skyline-apiserver                   2024.2   5c4b8d681b07   15 hours ago   996MB
registry.osism.tech/kolla/glance-api                          2024.2   c693cd13e1a7   15 hours ago   1.1GB
registry.osism.tech/kolla/ovn-sb-db-server                    2024.2   8d61dfdc9cc8   15 hours ago   295MB
registry.osism.tech/kolla/ovn-northd                          2024.2   5a95ece3086d   15 hours ago   295MB
registry.osism.tech/kolla/ovn-controller                      2024.2   1dff30622da9   15 hours ago   296MB
registry.osism.tech/kolla/ovn-nb-db-server                    2024.2   91a2b5b4d249   15 hours ago   295MB

2025-10-08 16:12:53 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-08 16:12:53 | orchestrator | ++ semver latest 5.0.0
2025-10-08 16:12:53 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-08 16:12:53 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-08 16:12:53 | orchestrator | ## Containers @ testbed-node-1
2025-10-08 16:12:53 | orchestrator | + osism container testbed-node-1 ps

CONTAINER ID   IMAGE                                                                COMMAND                  CREATED          STATUS                    NAMES
dc771a378b53   registry.osism.tech/kolla/octavia-worker:2024.2                      "dumb-init --single-…"   4 minutes ago    Up 4 minutes (healthy)    octavia_worker
2a5954dfde90   registry.osism.tech/kolla/octavia-housekeeping:2024.2                "dumb-init --single-…"   4 minutes ago    Up 4 minutes (healthy)    octavia_housekeeping
6c43f705abb9   registry.osism.tech/kolla/octavia-health-manager:2024.2              "dumb-init --single-…"   4 minutes ago    Up 4 minutes (healthy)    octavia_health_manager
8b748dda2998   registry.osism.tech/kolla/octavia-driver-agent:2024.2                "dumb-init --single-…"   5 minutes ago    Up 5 minutes              octavia_driver_agent
5c3866200eca   registry.osism.tech/kolla/octavia-api:2024.2                         "dumb-init --single-…"   5 minutes ago    Up 5 minutes (healthy)    octavia_api
466ea8a00c96   registry.osism.tech/kolla/grafana:2024.2                             "dumb-init --single-…"   8 minutes ago    Up 8 minutes              grafana
0974c269cdee   registry.osism.tech/kolla/magnum-conductor:2024.2                    "dumb-init --single-…"   9 minutes ago    Up 9 minutes (healthy)    magnum_conductor
26b509654aef   registry.osism.tech/kolla/magnum-api:2024.2                          "dumb-init --single-…"   9 minutes ago    Up 9 minutes (healthy)    magnum_api
c54c86f15a73   registry.osism.tech/kolla/placement-api:2024.2                       "dumb-init --single-…"   10 minutes ago   Up 10 minutes (healthy)   placement_api
41c321189272   registry.osism.tech/kolla/designate-worker:2024.2                    "dumb-init --single-…"   11 minutes ago   Up 11 minutes (healthy)   designate_worker
59310aff8432   registry.osism.tech/kolla/designate-mdns:2024.2                      "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   designate_mdns
805fb3f72e79   registry.osism.tech/kolla/nova-novncproxy:2024.2                     "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   nova_novncproxy
6e65192fe780   registry.osism.tech/kolla/designate-producer:2024.2                  "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   designate_producer
6c10a2dbaa20   registry.osism.tech/kolla/nova-conductor:2024.2                      "dumb-init --single-…"   12 minutes ago   Up 10 minutes (healthy)   nova_conductor
7203b3642877   registry.osism.tech/kolla/designate-central:2024.2                   "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   designate_central
43943c023160   registry.osism.tech/kolla/neutron-server:2024.2                      "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   neutron_server
7844353e00f7   registry.osism.tech/kolla/designate-api:2024.2                       "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   designate_api
c2a17127bcdf   registry.osism.tech/kolla/designate-backend-bind9:2024.2             "dumb-init --single-…"   12 minutes ago   Up 12 minutes (healthy)   designate_backend_bind9
7c1515f92df2   registry.osism.tech/kolla/nova-api:2024.2                            "dumb-init --single-…"   14 minutes ago   Up 13 minutes (healthy)   nova_api
389fbca04c2c   registry.osism.tech/kolla/barbican-worker:2024.2                     "dumb-init --single-…"   14 minutes ago   Up 14 minutes (healthy)   barbican_worker
4576e6ea2a43   registry.osism.tech/kolla/barbican-keystone-listener:2024.2          "dumb-init --single-…"   14 minutes ago   Up 14 minutes (healthy)   barbican_keystone_listener
c6bdf8b2025d   registry.osism.tech/kolla/nova-scheduler:2024.2                      "dumb-init --single-…"   14 minutes ago   Up 10 minutes (healthy)   nova_scheduler
12c1ca14d2ad   registry.osism.tech/kolla/barbican-api:2024.2                        "dumb-init --single-…"   14 minutes ago   Up 14 minutes (healthy)   barbican_api
42fc2a4b5dad   registry.osism.tech/kolla/cinder-scheduler:2024.2                    "dumb-init --single-…"   16 minutes ago   Up 16 minutes (healthy)   cinder_scheduler
e422c9020e96   registry.osism.tech/kolla/glance-api:2024.2                          "dumb-init --single-…"   16 minutes ago   Up 16 minutes (healthy)   glance_api
939e600dba58   registry.osism.tech/kolla/cinder-api:2024.2                          "dumb-init --single-…"   16 minutes ago   Up 16 minutes (healthy)   cinder_api
afa58f0f86c8   registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2   "dumb-init --single-…"   16 minutes ago   Up 16 minutes             prometheus_elasticsearch_exporter
ff5b187c2984   registry.osism.tech/kolla/prometheus-cadvisor:2024.2                 "dumb-init --single-…"   16 minutes ago   Up 16 minutes             prometheus_cadvisor
1103bdc9e78a   registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2       "dumb-init --single-…"   16 minutes ago   Up 16 minutes             prometheus_memcached_exporter
ccf32433d523   registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2          "dumb-init --single-…"   17 minutes ago   Up 17 minutes             prometheus_mysqld_exporter
80f2053ce31e   registry.osism.tech/kolla/prometheus-node-exporter:2024.2            "dumb-init --single-…"   17 minutes ago   Up 17 minutes             prometheus_node_exporter
4b2c2f8a247d   registry.osism.tech/osism/ceph-daemon:reef                           "/usr/bin/ceph-mgr -…"   18 minutes ago   Up 18 minutes             ceph-mgr-testbed-node-1
4b29d197b673   registry.osism.tech/kolla/keystone:2024.2                            "dumb-init --single-…"   19 minutes ago   Up 19 minutes (healthy)   keystone
89067aea003d   registry.osism.tech/kolla/horizon:2024.2                             "dumb-init --single-…"   19 minutes ago   Up 19 minutes (healthy)   horizon
d0ce0967a1eb   registry.osism.tech/kolla/keystone-fernet:2024.2                     "dumb-init --single-…"   19 minutes ago   Up 19 minutes (healthy)   keystone_fernet
8b70fbfcd615   registry.osism.tech/kolla/keystone-ssh:2024.2                        "dumb-init --single-…"   20 minutes ago   Up 20 minutes (healthy)   keystone_ssh
2896c7b0cdc5   registry.osism.tech/kolla/opensearch-dashboards:2024.2               "dumb-init --single-…"   22 minutes ago   Up 22 minutes (healthy)   opensearch_dashboards
0ca168cca967   registry.osism.tech/kolla/mariadb-server:2024.2                      "dumb-init -- kolla_…"   23 minutes ago   Up 23 minutes (healthy)   mariadb
4abb87e18cbf   registry.osism.tech/kolla/opensearch:2024.2                          "dumb-init --single-…"   23 minutes ago   Up 23 minutes (healthy)   opensearch
f85e1711ad47   registry.osism.tech/kolla/keepalived:2024.2                          "dumb-init --single-…"   25 minutes ago   Up 25 minutes             keepalived
16a95d101acf   registry.osism.tech/osism/ceph-daemon:reef                           "/usr/bin/ceph-crash"    25 minutes ago   Up 25 minutes             ceph-crash-testbed-node-1
01352c4636a8   registry.osism.tech/kolla/proxysql:2024.2                            "dumb-init --single-…"   25 minutes ago   Up 25 minutes (healthy)   proxysql
a6807bf17b0b   registry.osism.tech/kolla/haproxy:2024.2                             "dumb-init --single-…"   25 minutes ago   Up 25 minutes (healthy)   haproxy
d4793a5a8bb0   registry.osism.tech/kolla/ovn-northd:2024.2                          "dumb-init --single-…"   29 minutes ago   Up 28 minutes             ovn_northd
1a2c87f08631   registry.osism.tech/kolla/ovn-sb-db-server:2024.2                    "dumb-init --single-…"   29 minutes ago   Up 28 minutes             ovn_sb_db
341b80fbc30e   registry.osism.tech/kolla/ovn-nb-db-server:2024.2                    "dumb-init --single-…"   29 minutes ago   Up 28 minutes             ovn_nb_db
78d9c8db1380   registry.osism.tech/osism/ceph-daemon:reef                           "/usr/bin/ceph-mon -…"   29 minutes ago   Up 29 minutes             ceph-mon-testbed-node-1
d24d2cb6772a   registry.osism.tech/kolla/rabbitmq:2024.2                            "dumb-init --single-…"   29 minutes ago   Up 29 minutes (healthy)   rabbitmq
b95661e90460   registry.osism.tech/kolla/ovn-controller:2024.2                      "dumb-init --single-…"   29 minutes ago   Up 29 minutes             ovn_controller
e507a8d81c52   registry.osism.tech/kolla/openvswitch-vswitchd:2024.2                "dumb-init --single-…"   30 minutes ago   Up 30 minutes (healthy)   openvswitch_vswitchd
9fd2129ff8d2   registry.osism.tech/kolla/openvswitch-db-server:2024.2               "dumb-init --single-…"   31 minutes ago   Up 31 minutes (healthy)   openvswitch_db
0683994a6e09   registry.osism.tech/kolla/redis-sentinel:2024.2                      "dumb-init --single-…"   31 minutes ago   Up 31 minutes (healthy)   redis_sentinel
37454a0010f0   registry.osism.tech/kolla/redis:2024.2                               "dumb-init --single-…"   31 minutes ago   Up 31 minutes (healthy)   redis
e9028f2f427f   registry.osism.tech/kolla/memcached:2024.2                           "dumb-init --single-…"   31 minutes ago   Up 31 minutes (healthy)   memcached
5a30e45b16cd   registry.osism.tech/kolla/cron:2024.2                                "dumb-init --single-…"   31 minutes ago   Up 31 minutes             cron
54ea8bb1d0bd   registry.osism.tech/kolla/kolla-toolbox:2024.2                       "dumb-init --single-…"   31 minutes ago   Up 31 minutes             kolla_toolbox
332d93a6b638   registry.osism.tech/kolla/fluentd:2024.2                             "dumb-init --single-…"   32 minutes ago   Up 32 minutes             fluentd

2025-10-08 16:12:56 | orchestrator | ## Images @ testbed-node-1
2025-10-08 16:12:56 | orchestrator | + osism container testbed-node-1 images

REPOSITORY                                                    TAG      IMAGE ID       CREATED        SIZE
registry.osism.tech/osism/ceph-daemon                         reef     affe01d974d8   13 hours ago   1.27GB
registry.osism.tech/kolla/haproxy                             2024.2   e2128038df5a   15 hours ago   280MB
registry.osism.tech/kolla/kolla-toolbox                       2024.2   a902ffe2bb38   15 hours ago   674MB
registry.osism.tech/kolla/cron                                2024.2   1b35da557854   15 hours ago   272MB
registry.osism.tech/kolla/rabbitmq                            2024.2   bf0601e1d9dd   15 hours ago   329MB
registry.osism.tech/kolla/fluentd                             2024.2   e9dd1d6a248e   15 hours ago   585MB
registry.osism.tech/kolla/opensearch                          2024.2   097234f03734   15 hours ago   1.54GB
registry.osism.tech/kolla/opensearch-dashboards               2024.2   9c92a5db068f   15 hours ago   1.51GB
registry.osism.tech/kolla/grafana                             2024.2   91d2c74858eb   15 hours ago   1.01GB
registry.osism.tech/kolla/proxysql                            2024.2   03b0ef34c08b   15 hours ago   372MB
registry.osism.tech/kolla/memcached                           2024.2   1862aba89a40   15 hours ago   273MB
registry.osism.tech/kolla/keepalived                          2024.2   9eb1407fd9c4   15 hours ago   283MB
registry.osism.tech/kolla/redis                               2024.2   f442e4ccd930   15 hours ago   279MB
registry.osism.tech/kolla/redis-sentinel                      2024.2   dfc3d5ad7a0b   15 hours ago   279MB
registry.osism.tech/kolla/horizon                             2024.2   ef490390cfa1   15 hours ago   1.15GB
registry.osism.tech/kolla/openvswitch-db-server               2024.2   90f11a1dc3cb   15 hours ago   288MB
registry.osism.tech/kolla/openvswitch-vswitchd                2024.2   d1023156cdd7   15 hours ago   288MB
registry.osism.tech/kolla/prometheus-node-exporter            2024.2   0ba652f35626   15 hours ago   312MB
registry.osism.tech/kolla/prometheus-elasticsearch-exporter   2024.2   aac14f5199bf   15 hours ago   298MB
registry.osism.tech/kolla/prometheus-mysqld-exporter          2024.2   0e37577c38c0   15 hours ago   307MB
registry.osism.tech/kolla/prometheus-cadvisor                 2024.2   e6667577b663   15 hours ago   364MB
registry.osism.tech/kolla/prometheus-memcached-exporter       2024.2   33549d339512   15 hours ago   305MB
registry.osism.tech/kolla/mariadb-server                      2024.2   0c177a3a31d3   15 hours ago   454MB
registry.osism.tech/kolla/neutron-server                      2024.2   14c8cbf1ce0b   15 hours ago   1.17GB
registry.osism.tech/kolla/keystone                            2024.2   29d50291c3a1   15 hours ago   1.09GB
registry.osism.tech/kolla/keystone-ssh                        2024.2   94fb9811d29d   15 hours ago   1.05GB
registry.osism.tech/kolla/keystone-fernet                     2024.2   214291e459e9   15 hours ago   1.04GB
registry.osism.tech/kolla/octavia-housekeeping                2024.2   1c995604b137   15 hours ago   1.04GB
registry.osism.tech/kolla/octavia-api                         2024.2   f8e96b956121   15 hours ago   1.06GB
registry.osism.tech/kolla/octavia-worker                      2024.2   3f45d2d684ee   15 hours ago   1.04GB
registry.osism.tech/kolla/octavia-health-manager              2024.2   c4abba62f9b1   15 hours ago   1.04GB
registry.osism.tech/kolla/octavia-driver-agent                2024.2   b3ab5aaead30   15 hours ago   1.06GB
registry.osism.tech/kolla/nova-api                            2024.2   771727e63a3d   15 hours ago   1.21GB
registry.osism.tech/kolla/nova-conductor                      2024.2   30e75f195140   15 hours ago   1.21GB
registry.osism.tech/kolla/nova-scheduler                      2024.2   8a06c53514f5   15 hours ago   1.21GB
registry.osism.tech/kolla/nova-novncproxy                     2024.2   c69e7d6545e0   15 hours ago   1.37GB
registry.osism.tech/kolla/cinder-api                          2024.2   cc4b238b320d   15 hours ago   1.41GB
registry.osism.tech/kolla/cinder-scheduler                    2024.2   9e17733534c9   15 hours ago   1.41GB
registry.osism.tech/kolla/placement-api                       2024.2   0a090cc4008f   15 hours ago   982MB
registry.osism.tech/kolla/designate-mdns                      2024.2   1f7624c10781   15 hours ago   991MB
registry.osism.tech/kolla/designate-backend-bind9             2024.2   db5771ee6474   15 hours ago   996MB
registry.osism.tech/kolla/designate-api                       2024.2   ad73d07baf43   15 hours ago   991MB
registry.osism.tech/kolla/designate-central                   2024.2   f48de6e91b44   15 hours ago   990MB
orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d72a3b57da3 15 hours ago 996MB 2025-10-08 16:12:59.146654 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 be797e6d257f 15 hours ago 991MB 2025-10-08 16:12:59.146666 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 f530078c76c5 15 hours ago 1.25GB 2025-10-08 16:12:59.146679 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 3ce47eb7e908 15 hours ago 1.13GB 2025-10-08 16:12:59.146692 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2a03fb3698ee 15 hours ago 998MB 2025-10-08 16:12:59.146705 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50233cbe9a85 15 hours ago 997MB 2025-10-08 16:12:59.146717 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c25ce9c67829 15 hours ago 998MB 2025-10-08 16:12:59.146730 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 c693cd13e1a7 15 hours ago 1.1GB 2025-10-08 16:12:59.146743 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 8d61dfdc9cc8 15 hours ago 295MB 2025-10-08 16:12:59.146755 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 5a95ece3086d 15 hours ago 295MB 2025-10-08 16:12:59.146768 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 1dff30622da9 15 hours ago 296MB 2025-10-08 16:12:59.146780 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 91a2b5b4d249 15 hours ago 295MB 2025-10-08 16:12:59.609250 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-08 16:12:59.609824 | orchestrator | ++ semver latest 5.0.0 2025-10-08 16:12:59.677673 | orchestrator | 2025-10-08 16:12:59.677710 | orchestrator | ## Containers @ testbed-node-2 2025-10-08 16:12:59.677722 | orchestrator | 2025-10-08 16:12:59.677733 | orchestrator | + [[ -1 -eq -1 ]] 2025-10-08 16:12:59.677744 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-08 16:12:59.677755 | orchestrator 
| + echo
2025-10-08 16:12:59.677767 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-10-08 16:12:59.677779 | orchestrator | + echo
2025-10-08 16:12:59.677790 | orchestrator | + osism container testbed-node-2 ps
2025-10-08 16:13:02.125005 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-10-08 16:13:02.125183 | orchestrator | 04008eac7f07 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-10-08 16:13:02.125202 | orchestrator | 66dc25b7f47d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-10-08 16:13:02.125238 | orchestrator | 13cef1b2b60f registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2025-10-08 16:13:02.125251 | orchestrator | abb03b1a75b4 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-10-08 16:13:02.125262 | orchestrator | 29cd5597807c registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-10-08 16:13:02.125292 | orchestrator | be5cb32d28a2 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-10-08 16:13:02.125304 | orchestrator | 08a663dc0455 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2025-10-08 16:13:02.125315 | orchestrator | 7e8e025d4c65 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2025-10-08 16:13:02.125326 | orchestrator | 1e315c1c861e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api
2025-10-08 16:13:02.125337 | orchestrator | 1d40b05debf0 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2025-10-08 16:13:02.125348 | orchestrator | 157b27eda80a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_mdns
2025-10-08 16:13:02.125359 | orchestrator | 9bce7cfddd40 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_novncproxy
2025-10-08 16:13:02.125370 | orchestrator | 4ccc14366250 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_producer
2025-10-08 16:13:02.125381 | orchestrator | 2a77d47ef915 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_conductor
2025-10-08 16:13:02.125392 | orchestrator | 7825f3e9a7da registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_central
2025-10-08 16:13:02.125402 | orchestrator | 6dbe81005587 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server
2025-10-08 16:13:02.125413 | orchestrator | 46bfc8afdf9d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api
2025-10-08 16:13:02.125424 | orchestrator | bc69b439dada registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9
2025-10-08 16:13:02.125435 | orchestrator | 7e7e5c20ff6b registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api
2025-10-08 16:13:02.125446 | orchestrator | 4f7b75484378 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker
2025-10-08 16:13:02.125457 | orchestrator | c86249d17377 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler
2025-10-08 16:13:02.125485 | orchestrator | 69e65877d924 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_keystone_listener
2025-10-08 16:13:02.125505 | orchestrator | 914557cf4752 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api
2025-10-08 16:13:02.125516 | orchestrator | 7f6923c0d1f1 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler
2025-10-08 16:13:02.125527 | orchestrator | 3f26f335734a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api
2025-10-08 16:13:02.125538 | orchestrator | 350b7dea3e48 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api
2025-10-08 16:13:02.125549 | orchestrator | 564cd1082ca7 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter
2025-10-08 16:13:02.125563 | orchestrator | 22d965e8b468 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2025-10-08 16:13:02.125575 | orchestrator | eecc419b8b6d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter
2025-10-08 16:13:02.125588 | orchestrator | 5f8b9a7db279 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter
2025-10-08 16:13:02.125601 | orchestrator | 8144e6fe4ec6 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2025-10-08 16:13:02.125613 | orchestrator | 152d06f8afdd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2
2025-10-08 16:13:02.125626 | orchestrator | 7243cc0f40ac registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-10-08 16:13:02.125638 | orchestrator | 1442fe2ef87d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet
2025-10-08 16:13:02.125650 | orchestrator | d4912d39d7cd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-10-08 16:13:02.125662 | orchestrator | 068f967a110a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh
2025-10-08 16:13:02.125675 | orchestrator | 39443caa86ec registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-10-08 16:13:02.125687 | orchestrator | 93d25da2bcf3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-10-08 16:13:02.125705 | orchestrator | aca6dd10713e registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-10-08 16:13:02.125719 | orchestrator | d841f2e4ea72 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived
2025-10-08 16:13:02.125732 | orchestrator | 707c48a90622 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2
2025-10-08 16:13:02.125750 | orchestrator | c98459f2d579 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql
2025-10-08
16:13:02.125764 | orchestrator | 4c577afd30d2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-10-08 16:13:02.125777 | orchestrator | 3389dfbdeefc registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd
2025-10-08 16:13:02.125796 | orchestrator | a5446d8d50bc registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db
2025-10-08 16:13:02.125814 | orchestrator | 1e375f4fd1ba registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db
2025-10-08 16:13:02.125828 | orchestrator | 84dc132a099d registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-10-08 16:13:02.125840 | orchestrator | a01f6c2a66ce registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2
2025-10-08 16:13:02.125853 | orchestrator | 2d7d91755f4d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-10-08 16:13:02.125865 | orchestrator | 5357acb00a7e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-10-08 16:13:02.125877 | orchestrator | 679e365faaea registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db
2025-10-08 16:13:02.125890 | orchestrator | 4bcba7fe0cb8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2025-10-08 16:13:02.125903 | orchestrator | f148e383dde4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-10-08 16:13:02.125916 | orchestrator | 19d0879c098c registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-10-08 16:13:02.125930 | orchestrator | 01851fc77214 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-10-08 16:13:02.125940 | orchestrator | 35221fde98f4 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-10-08 16:13:02.125952 | orchestrator | 33cbe36a2a23 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-10-08 16:13:02.448836 | orchestrator |
2025-10-08 16:13:02.448930 | orchestrator | ## Images @ testbed-node-2
2025-10-08 16:13:02.448945 | orchestrator |
2025-10-08 16:13:02.448957 | orchestrator | + echo
2025-10-08 16:13:02.448969 | orchestrator | + echo '## Images @ testbed-node-2'
2025-10-08 16:13:02.448981 | orchestrator | + echo
2025-10-08 16:13:02.448992 | orchestrator | + osism container testbed-node-2 images
2025-10-08 16:13:04.919611 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-10-08 16:13:04.919713 | orchestrator | registry.osism.tech/osism/ceph-daemon reef affe01d974d8 13 hours ago 1.27GB
2025-10-08 16:13:04.919727 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 e2128038df5a 15 hours ago 280MB
2025-10-08 16:13:04.919739 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a902ffe2bb38 15 hours ago 674MB
2025-10-08 16:13:04.919772 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1b35da557854 15 hours ago 272MB
2025-10-08 16:13:04.919783 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 bf0601e1d9dd 15 hours ago 329MB
2025-10-08 16:13:04.919794 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 e9dd1d6a248e 15 hours ago 585MB
2025-10-08 16:13:04.919805 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 097234f03734 15 hours ago 1.54GB
2025-10-08 16:13:04.919815 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9c92a5db068f 15 hours ago 1.51GB
2025-10-08 16:13:04.919826 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 91d2c74858eb 15 hours ago 1.01GB
2025-10-08 16:13:04.919837 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 03b0ef34c08b 15 hours ago 372MB
2025-10-08 16:13:04.919848 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 1862aba89a40 15 hours ago 273MB
2025-10-08 16:13:04.919858 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 9eb1407fd9c4 15 hours ago 283MB
2025-10-08 16:13:04.919869 | orchestrator | registry.osism.tech/kolla/redis 2024.2 f442e4ccd930 15 hours ago 279MB
2025-10-08 16:13:04.919880 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 dfc3d5ad7a0b 15 hours ago 279MB
2025-10-08 16:13:04.919890 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef490390cfa1 15 hours ago 1.15GB
2025-10-08 16:13:04.919901 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d1023156cdd7 15 hours ago 288MB
2025-10-08 16:13:04.919912 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 90f11a1dc3cb 15 hours ago 288MB
2025-10-08 16:13:04.919923 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ba652f35626 15 hours ago 312MB
2025-10-08 16:13:04.919933 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 aac14f5199bf 15 hours ago 298MB
2025-10-08 16:13:04.919944 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 0e37577c38c0 15 hours ago 307MB
2025-10-08 16:13:04.919955 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 e6667577b663 15 hours ago 364MB
2025-10-08 16:13:04.919965 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 33549d339512 15 hours ago 305MB
2025-10-08 16:13:04.919976 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 0c177a3a31d3 15 hours ago 454MB
2025-10-08 16:13:04.919987 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 14c8cbf1ce0b 15 hours ago 1.17GB
2025-10-08 16:13:04.919997 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 29d50291c3a1 15 hours ago 1.09GB
2025-10-08 16:13:04.920008 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 94fb9811d29d 15 hours ago 1.05GB
2025-10-08 16:13:04.920019 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 214291e459e9 15 hours ago 1.04GB
2025-10-08 16:13:04.920084 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1c995604b137 15 hours ago 1.04GB
2025-10-08 16:13:04.920097 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f8e96b956121 15 hours ago 1.06GB
2025-10-08 16:13:04.920108 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 3f45d2d684ee 15 hours ago 1.04GB
2025-10-08 16:13:04.920138 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c4abba62f9b1 15 hours ago 1.04GB
2025-10-08 16:13:04.920152 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b3ab5aaead30 15 hours ago 1.06GB
2025-10-08 16:13:04.920173 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 771727e63a3d 15 hours ago 1.21GB
2025-10-08 16:13:04.920186 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 30e75f195140 15 hours ago 1.21GB
2025-10-08 16:13:04.920198 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 8a06c53514f5 15 hours ago 1.21GB
2025-10-08 16:13:04.920211 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c69e7d6545e0 15 hours ago 1.37GB
2025-10-08 16:13:04.920242 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 cc4b238b320d 15 hours ago 1.41GB
2025-10-08 16:13:04.920255 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 9e17733534c9 15 hours ago 1.41GB
2025-10-08 16:13:04.920268 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0a090cc4008f 15 hours ago 982MB
2025-10-08 16:13:04.920281 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1f7624c10781 15 hours ago 991MB
2025-10-08 16:13:04.920297 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 db5771ee6474 15 hours ago 996MB
2025-10-08 16:13:04.920318 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 ad73d07baf43 15 hours ago 991MB
2025-10-08 16:13:04.920337 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f48de6e91b44 15 hours ago 990MB
2025-10-08 16:13:04.920356 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d72a3b57da3 15 hours ago 996MB
2025-10-08 16:13:04.920374 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 be797e6d257f 15 hours ago 991MB
2025-10-08 16:13:04.920393 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 f530078c76c5 15 hours ago 1.25GB
2025-10-08 16:13:04.920413 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 3ce47eb7e908 15 hours ago 1.13GB
2025-10-08 16:13:04.920427 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2a03fb3698ee 15 hours ago 998MB
2025-10-08 16:13:04.920445 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50233cbe9a85 15 hours ago 997MB
2025-10-08 16:13:04.920463 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c25ce9c67829 15 hours ago 998MB
2025-10-08 16:13:04.920482 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 c693cd13e1a7 15 hours ago 1.1GB
2025-10-08 16:13:04.920500 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 8d61dfdc9cc8 15 hours ago 295MB
2025-10-08 16:13:04.920519 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 1dff30622da9 15 hours ago 296MB
2025-10-08 16:13:04.920538 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 5a95ece3086d 15 hours ago 295MB
2025-10-08 16:13:04.920559 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 91a2b5b4d249 15 hours
ago 295MB
2025-10-08 16:13:05.240089 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-10-08 16:13:05.246845 | orchestrator | + set -e
2025-10-08 16:13:05.246897 | orchestrator | + source /opt/manager-vars.sh
2025-10-08 16:13:05.247917 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-08 16:13:05.247941 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-08 16:13:05.247953 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-08 16:13:05.247963 | orchestrator | ++ CEPH_VERSION=reef
2025-10-08 16:13:05.247975 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-08 16:13:05.247987 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-08 16:13:05.247998 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 16:13:05.248009 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 16:13:05.248019 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-08 16:13:05.248315 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-08 16:13:05.248341 | orchestrator | ++ export ARA=false
2025-10-08 16:13:05.248353 | orchestrator | ++ ARA=false
2025-10-08 16:13:05.248371 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-08 16:13:05.248393 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-08 16:13:05.248444 | orchestrator | ++ export TEMPEST=false
2025-10-08 16:13:05.248464 | orchestrator | ++ TEMPEST=false
2025-10-08 16:13:05.248482 | orchestrator | ++ export IS_ZUUL=true
2025-10-08 16:13:05.248501 | orchestrator | ++ IS_ZUUL=true
2025-10-08 16:13:05.248520 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 16:13:05.248540 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 16:13:05.248560 | orchestrator | ++ export EXTERNAL_API=false
2025-10-08 16:13:05.248575 | orchestrator | ++ EXTERNAL_API=false
2025-10-08 16:13:05.248586 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-08 16:13:05.248596 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-08 16:13:05.248607 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-08 16:13:05.248618 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-08 16:13:05.248629 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-08 16:13:05.248730 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-08 16:13:05.248751 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-10-08 16:13:05.248763 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-10-08 16:13:05.259752 | orchestrator | + set -e
2025-10-08 16:13:05.259788 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-08 16:13:05.259800 | orchestrator | ++ export INTERACTIVE=false
2025-10-08 16:13:05.259812 | orchestrator | ++ INTERACTIVE=false
2025-10-08 16:13:05.259823 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-08 16:13:05.259833 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-08 16:13:05.259844 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-10-08 16:13:05.259856 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-10-08 16:13:05.266825 | orchestrator |
2025-10-08 16:13:05.266856 | orchestrator | # Ceph status
2025-10-08 16:13:05.266867 | orchestrator |
2025-10-08 16:13:05.266878 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 16:13:05.266889 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 16:13:05.266900 | orchestrator | + echo
2025-10-08 16:13:05.266911 | orchestrator | + echo '# Ceph status'
2025-10-08 16:13:05.266922 | orchestrator | + echo
2025-10-08 16:13:05.266933 | orchestrator | + ceph -s
2025-10-08 16:13:05.886839 | orchestrator | cluster:
2025-10-08 16:13:05.886934 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-10-08 16:13:05.886950 | orchestrator | health: HEALTH_OK
2025-10-08 16:13:05.886963 | orchestrator |
2025-10-08 16:13:05.886975 | orchestrator | services:
2025-10-08 16:13:05.886987 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m)
2025-10-08 16:13:05.886999 | orchestrator | mgr: testbed-node-0(active, since 17m), standbys: testbed-node-1, testbed-node-2
2025-10-08 16:13:05.887011 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-10-08 16:13:05.887023 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 26m)
2025-10-08 16:13:05.887063 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-10-08 16:13:05.887074 | orchestrator |
2025-10-08 16:13:05.887085 | orchestrator | data:
2025-10-08 16:13:05.887096 | orchestrator | volumes: 1/1 healthy
2025-10-08 16:13:05.887107 | orchestrator | pools: 14 pools, 401 pgs
2025-10-08 16:13:05.887118 | orchestrator | objects: 523 objects, 2.2 GiB
2025-10-08 16:13:05.887129 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-10-08 16:13:05.887140 | orchestrator | pgs: 401 active+clean
2025-10-08 16:13:05.887151 | orchestrator |
2025-10-08 16:13:05.932246 | orchestrator |
2025-10-08 16:13:05.932302 | orchestrator | # Ceph versions
2025-10-08 16:13:05.932315 | orchestrator |
2025-10-08 16:13:05.932327 | orchestrator | + echo
2025-10-08 16:13:05.932338 | orchestrator | + echo '# Ceph versions'
2025-10-08 16:13:05.932350 | orchestrator | + echo
2025-10-08 16:13:05.932362 | orchestrator | + ceph versions
2025-10-08 16:13:06.575344 | orchestrator | {
2025-10-08 16:13:06.575439 | orchestrator | "mon": {
2025-10-08 16:13:06.575454 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-10-08 16:13:06.575467 | orchestrator | },
2025-10-08 16:13:06.575478 | orchestrator | "mgr": {
2025-10-08 16:13:06.575489 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-10-08 16:13:06.575500 | orchestrator | },
2025-10-08 16:13:06.575511 | orchestrator | "osd": {
2025-10-08 16:13:06.575522 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-10-08 16:13:06.575533 | orchestrator | },
2025-10-08 16:13:06.575544 | orchestrator | "mds": {
2025-10-08 16:13:06.575555 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-10-08 16:13:06.575595 | orchestrator | },
2025-10-08 16:13:06.575606 | orchestrator | "rgw": {
2025-10-08 16:13:06.575617 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-10-08 16:13:06.575628 | orchestrator | },
2025-10-08 16:13:06.575639 | orchestrator | "overall": {
2025-10-08 16:13:06.575651 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-10-08 16:13:06.575662 | orchestrator | }
2025-10-08 16:13:06.575673 | orchestrator | }
2025-10-08 16:13:06.637098 | orchestrator |
2025-10-08 16:13:06.637178 | orchestrator | # Ceph OSD tree
2025-10-08 16:13:06.637197 | orchestrator |
2025-10-08 16:13:06.637214 | orchestrator | + echo
2025-10-08 16:13:06.637231 | orchestrator | + echo '# Ceph OSD tree'
2025-10-08 16:13:06.637248 | orchestrator | + echo
2025-10-08 16:13:06.637265 | orchestrator | + ceph osd df tree
2025-10-08 16:13:07.144730 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-10-08 16:13:07.144853 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-10-08 16:13:07.144881 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-10-08 16:13:07.145617 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.71 1.13 193 up osd.0
2025-10-08 16:13:07.145637 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 70 MiB 19 GiB 5.12 0.87 195 up osd.5
2025-10-08 16:13:07.145649 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-10-08 16:13:07.145660 | orchestrator | 1 hdd 0.01949
1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.73 1.14 204 up osd.1
2025-10-08 16:13:07.145671 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 74 MiB 19 GiB 5.10 0.86 186 up osd.4
2025-10-08 16:13:07.145682 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-10-08 16:13:07.145693 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 192 up osd.2
2025-10-08 16:13:07.145704 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.36 0.91 200 up osd.3
2025-10-08 16:13:07.145715 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-10-08 16:13:07.145727 | orchestrator | MIN/MAX VAR: 0.86/1.14 STDDEV: 0.73
2025-10-08 16:13:07.190365 | orchestrator |
2025-10-08 16:13:07.190432 | orchestrator | # Ceph monitor status
2025-10-08 16:13:07.190446 | orchestrator |
2025-10-08 16:13:07.190457 | orchestrator | + echo
2025-10-08 16:13:07.190469 | orchestrator | + echo '# Ceph monitor status'
2025-10-08 16:13:07.190480 | orchestrator | + echo
2025-10-08 16:13:07.190491 | orchestrator | + ceph mon stat
2025-10-08 16:13:07.918907 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-10-08 16:13:07.971451 | orchestrator |
2025-10-08 16:13:07.971522 | orchestrator | # Ceph quorum status
2025-10-08 16:13:07.971537 | orchestrator |
2025-10-08 16:13:07.971548 | orchestrator | + echo
2025-10-08 16:13:07.971560 | orchestrator | + echo '# Ceph quorum status'
2025-10-08 16:13:07.971571 | orchestrator | + echo
2025-10-08 16:13:07.972407 | orchestrator | + ceph quorum_status
2025-10-08 16:13:07.972428 | orchestrator | + jq
2025-10-08 16:13:08.677247 | orchestrator | {
2025-10-08 16:13:08.677345 | orchestrator | "election_epoch": 4,
2025-10-08 16:13:08.677360 | orchestrator | "quorum": [
2025-10-08 16:13:08.677372 | orchestrator | 0,
2025-10-08 16:13:08.677383 | orchestrator | 1,
2025-10-08 16:13:08.677394 | orchestrator | 2
2025-10-08 16:13:08.677405 | orchestrator | ],
2025-10-08 16:13:08.677416 | orchestrator | "quorum_names": [
2025-10-08 16:13:08.677427 | orchestrator | "testbed-node-0",
2025-10-08 16:13:08.677465 | orchestrator | "testbed-node-1",
2025-10-08 16:13:08.677477 | orchestrator | "testbed-node-2"
2025-10-08 16:13:08.677488 | orchestrator | ],
2025-10-08 16:13:08.677499 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-10-08 16:13:08.677512 | orchestrator | "quorum_age": 1780,
2025-10-08 16:13:08.677523 | orchestrator | "features": {
2025-10-08 16:13:08.677534 | orchestrator | "quorum_con": "4540138322906710015",
2025-10-08 16:13:08.677545 | orchestrator | "quorum_mon": [
2025-10-08 16:13:08.677556 | orchestrator | "kraken",
2025-10-08 16:13:08.677566 | orchestrator | "luminous",
2025-10-08 16:13:08.677577 | orchestrator | "mimic",
2025-10-08 16:13:08.677593 | orchestrator | "osdmap-prune",
2025-10-08 16:13:08.677612 | orchestrator | "nautilus",
2025-10-08 16:13:08.677630 | orchestrator | "octopus",
2025-10-08 16:13:08.677648 | orchestrator | "pacific",
2025-10-08 16:13:08.677666 | orchestrator | "elector-pinging",
2025-10-08 16:13:08.677684 | orchestrator | "quincy",
2025-10-08 16:13:08.677704 | orchestrator | "reef"
2025-10-08 16:13:08.677716 | orchestrator | ]
2025-10-08 16:13:08.677727 | orchestrator | },
2025-10-08 16:13:08.677738 | orchestrator | "monmap": {
2025-10-08 16:13:08.677748 | orchestrator | "epoch": 1,
2025-10-08 16:13:08.677759 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-10-08 16:13:08.677771 | orchestrator | "modified": "2025-10-08T15:43:15.635742Z",
2025-10-08 16:13:08.677782 | orchestrator | "created": "2025-10-08T15:43:15.635742Z",
2025-10-08 16:13:08.677792 | orchestrator | "min_mon_release": 18,
2025-10-08 16:13:08.677803 | orchestrator | "min_mon_release_name": "reef",
2025-10-08 16:13:08.677814 | orchestrator | "election_strategy": 1,
2025-10-08 16:13:08.677981 | orchestrator | "disallowed_leaders: ": "",
2025-10-08 16:13:08.677995 | orchestrator | "stretch_mode": false,
2025-10-08 16:13:08.678006 | orchestrator | "tiebreaker_mon": "",
2025-10-08 16:13:08.678146 | orchestrator | "removed_ranks: ": "",
2025-10-08 16:13:08.678179 | orchestrator | "features": {
2025-10-08 16:13:08.678194 | orchestrator | "persistent": [
2025-10-08 16:13:08.678205 | orchestrator | "kraken",
2025-10-08 16:13:08.678216 | orchestrator | "luminous",
2025-10-08 16:13:08.678226 | orchestrator | "mimic",
2025-10-08 16:13:08.678237 | orchestrator | "osdmap-prune",
2025-10-08 16:13:08.678247 | orchestrator | "nautilus",
2025-10-08 16:13:08.678257 | orchestrator | "octopus",
2025-10-08 16:13:08.678268 | orchestrator | "pacific",
2025-10-08 16:13:08.678278 | orchestrator | "elector-pinging",
2025-10-08 16:13:08.678288 | orchestrator | "quincy",
2025-10-08 16:13:08.678299 | orchestrator | "reef"
2025-10-08 16:13:08.678310 | orchestrator | ],
2025-10-08 16:13:08.678320 | orchestrator | "optional": []
2025-10-08 16:13:08.678335 | orchestrator | },
2025-10-08 16:13:08.678354 | orchestrator | "mons": [
2025-10-08 16:13:08.678369 | orchestrator | {
2025-10-08 16:13:08.678380 | orchestrator | "rank": 0,
2025-10-08 16:13:08.678391 | orchestrator | "name": "testbed-node-0",
2025-10-08 16:13:08.678401 | orchestrator | "public_addrs": {
2025-10-08 16:13:08.678412 | orchestrator | "addrvec": [
2025-10-08 16:13:08.678423 | orchestrator | {
2025-10-08 16:13:08.678433 | orchestrator | "type": "v2",
2025-10-08 16:13:08.678444 | orchestrator | "addr": "192.168.16.10:3300",
2025-10-08 16:13:08.678454 | orchestrator | "nonce": 0
2025-10-08 16:13:08.678465 | orchestrator | },
2025-10-08 16:13:08.678475 | orchestrator | {
2025-10-08 16:13:08.678486 | orchestrator | "type": "v1",
2025-10-08 16:13:08.678496 | orchestrator | "addr": "192.168.16.10:6789",
2025-10-08 16:13:08.678507 | orchestrator | "nonce": 0
2025-10-08 16:13:08.678517 | orchestrator | }
2025-10-08 16:13:08.678533 | orchestrator | ]
2025-10-08 16:13:08.678552 | orchestrator | },
2025-10-08 16:13:08.678563 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-10-08 16:13:08.678574 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-10-08 16:13:08.678584 | orchestrator | "priority": 0,
2025-10-08 16:13:08.678595 | orchestrator | "weight": 0,
2025-10-08 16:13:08.678606 | orchestrator | "crush_location": "{}"
2025-10-08 16:13:08.678616 | orchestrator | },
2025-10-08 16:13:08.678627 | orchestrator | {
2025-10-08 16:13:08.678637 | orchestrator | "rank": 1,
2025-10-08 16:13:08.678648 | orchestrator | "name": "testbed-node-1",
2025-10-08 16:13:08.678659 | orchestrator | "public_addrs": {
2025-10-08 16:13:08.678669 | orchestrator | "addrvec": [
2025-10-08 16:13:08.678680 | orchestrator | {
2025-10-08 16:13:08.678690 | orchestrator | "type": "v2",
2025-10-08 16:13:08.678701 | orchestrator | "addr": "192.168.16.11:3300",
2025-10-08 16:13:08.678743 | orchestrator | "nonce": 0
2025-10-08 16:13:08.678766 | orchestrator | },
2025-10-08 16:13:08.678784 | orchestrator | {
2025-10-08 16:13:08.678802 | orchestrator | "type": "v1",
2025-10-08 16:13:08.678820 | orchestrator | "addr": "192.168.16.11:6789",
2025-10-08 16:13:08.678840 | orchestrator | "nonce": 0
2025-10-08 16:13:08.678859 | orchestrator | }
2025-10-08 16:13:08.678876 | orchestrator | ]
2025-10-08 16:13:08.678895 | orchestrator | },
2025-10-08 16:13:08.678913 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-10-08 16:13:08.678931 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-10-08 16:13:08.678943 | orchestrator | "priority": 0,
2025-10-08 16:13:08.678953 | orchestrator | "weight": 0,
2025-10-08 16:13:08.678964 |
orchestrator | "crush_location": "{}" 2025-10-08 16:13:08.678975 | orchestrator | }, 2025-10-08 16:13:08.678985 | orchestrator | { 2025-10-08 16:13:08.678996 | orchestrator | "rank": 2, 2025-10-08 16:13:08.679006 | orchestrator | "name": "testbed-node-2", 2025-10-08 16:13:08.679017 | orchestrator | "public_addrs": { 2025-10-08 16:13:08.679056 | orchestrator | "addrvec": [ 2025-10-08 16:13:08.679067 | orchestrator | { 2025-10-08 16:13:08.679078 | orchestrator | "type": "v2", 2025-10-08 16:13:08.679089 | orchestrator | "addr": "192.168.16.12:3300", 2025-10-08 16:13:08.679099 | orchestrator | "nonce": 0 2025-10-08 16:13:08.679110 | orchestrator | }, 2025-10-08 16:13:08.679120 | orchestrator | { 2025-10-08 16:13:08.679131 | orchestrator | "type": "v1", 2025-10-08 16:13:08.679141 | orchestrator | "addr": "192.168.16.12:6789", 2025-10-08 16:13:08.679152 | orchestrator | "nonce": 0 2025-10-08 16:13:08.679162 | orchestrator | } 2025-10-08 16:13:08.679173 | orchestrator | ] 2025-10-08 16:13:08.679183 | orchestrator | }, 2025-10-08 16:13:08.679194 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-10-08 16:13:08.679204 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-10-08 16:13:08.679215 | orchestrator | "priority": 0, 2025-10-08 16:13:08.679226 | orchestrator | "weight": 0, 2025-10-08 16:13:08.679236 | orchestrator | "crush_location": "{}" 2025-10-08 16:13:08.679247 | orchestrator | } 2025-10-08 16:13:08.679257 | orchestrator | ] 2025-10-08 16:13:08.679268 | orchestrator | } 2025-10-08 16:13:08.679278 | orchestrator | } 2025-10-08 16:13:08.679304 | orchestrator | 2025-10-08 16:13:08.679315 | orchestrator | # Ceph free space status 2025-10-08 16:13:08.679326 | orchestrator | 2025-10-08 16:13:08.679337 | orchestrator | + echo 2025-10-08 16:13:08.679348 | orchestrator | + echo '# Ceph free space status' 2025-10-08 16:13:08.679358 | orchestrator | + echo 2025-10-08 16:13:08.679369 | orchestrator | + ceph df 2025-10-08 16:13:09.249683 | orchestrator | --- RAW 
STORAGE ---
CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd      120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
TOTAL    120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92

--- POOLS ---
POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
.rgw.root                   9   32  3.0 KiB        7   56 KiB      0     53 GiB
backups                    10   32     19 B        2   12 KiB      0     35 GiB
volumes                    11   32     19 B        2   12 KiB      0     35 GiB
images                     12   32  2.2 GiB      299  6.7 GiB   5.92     35 GiB
metrics                    13   32     19 B        2   12 KiB      0     35 GiB
vms                        14   32     19 B        2   12 KiB      0     35 GiB
2025-10-08 16:13:09.301562 | orchestrator | ++ semver latest 5.0.0
2025-10-08 16:13:09.372110 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-08 16:13:09.372168 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-08 16:13:09.372189 | orchestrator | + [[ !
-e /etc/redhat-release ]]
2025-10-08 16:13:09.372209 | orchestrator | + osism apply facts
2025-10-08 16:13:11 | INFO  | Task fd44d214-1933-4e1b-b2b7-dc706922e573 (facts) was prepared for execution.
2025-10-08 16:13:11 | INFO  | It takes a moment until task fd44d214-1933-4e1b-b2b7-dc706922e573 (facts) has been started and output is visible here.

PLAY [Apply role facts] ********************************************************

TASK [osism.commons.facts : Create custom facts directory] *********************
Wednesday 08 October 2025 16:13:15 +0000 (0:00:00.267)       0:00:00.267 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.facts : Copy fact files] ***********************************
Wednesday 08 October 2025 16:13:17 +0000 (0:00:01.516)       0:00:01.783 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Wednesday 08 October 2025 16:13:18 +0000 (0:00:01.305)       0:00:03.089 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

PLAY [Gather facts for all hosts if using --limit] *****************************

TASK [Gather facts for all hosts] **********************************************
Wednesday 08 October 2025 16:13:24 +0000 (0:00:06.019)       0:00:09.109 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager            : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-0             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-1             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-2             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-3             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-4             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-5             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 08 October 2025 16:13:25 +0000 (0:00:00.694)       0:00:09.803 *****
===============================================================================
Gathers facts about hosts ----------------------------------------------- 6.02s
osism.commons.facts : Create custom facts directory --------------------- 1.52s
osism.commons.facts : Copy fact files ----------------------------------- 1.31s
Gather facts for all hosts
---------------------------------------------- 0.69s 2025-10-08 16:13:26.208430 | orchestrator | + osism validate ceph-mons 2025-10-08 16:13:58.535727 | orchestrator | 2025-10-08 16:13:58.680432 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-10-08 16:13:58.680509 | orchestrator | 2025-10-08 16:13:58.680524 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-08 16:13:58.680536 | orchestrator | Wednesday 08 October 2025 16:13:42 +0000 (0:00:00.478) 0:00:00.478 ***** 2025-10-08 16:13:58.680549 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.680560 | orchestrator | 2025-10-08 16:13:58.680571 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-08 16:13:58.680582 | orchestrator | Wednesday 08 October 2025 16:13:43 +0000 (0:00:00.886) 0:00:01.364 ***** 2025-10-08 16:13:58.680593 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.680604 | orchestrator | 2025-10-08 16:13:58.680615 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-08 16:13:58.680626 | orchestrator | Wednesday 08 October 2025 16:13:44 +0000 (0:00:00.965) 0:00:02.329 ***** 2025-10-08 16:13:58.680637 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.680649 | orchestrator | 2025-10-08 16:13:58.680660 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-10-08 16:13:58.680671 | orchestrator | Wednesday 08 October 2025 16:13:44 +0000 (0:00:00.151) 0:00:02.481 ***** 2025-10-08 16:13:58.680682 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.680693 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:13:58.680704 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:13:58.680715 | orchestrator | 2025-10-08 16:13:58.680726 | orchestrator | TASK [Get 
container info] ****************************************************** 2025-10-08 16:13:58.680737 | orchestrator | Wednesday 08 October 2025 16:13:45 +0000 (0:00:00.295) 0:00:02.776 ***** 2025-10-08 16:13:58.680770 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:13:58.680782 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:13:58.680792 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.680803 | orchestrator | 2025-10-08 16:13:58.680814 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-10-08 16:13:58.680825 | orchestrator | Wednesday 08 October 2025 16:13:46 +0000 (0:00:01.001) 0:00:03.778 ***** 2025-10-08 16:13:58.680836 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.680847 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:13:58.680858 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:13:58.680869 | orchestrator | 2025-10-08 16:13:58.680879 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-10-08 16:13:58.680890 | orchestrator | Wednesday 08 October 2025 16:13:46 +0000 (0:00:00.303) 0:00:04.082 ***** 2025-10-08 16:13:58.680901 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.680912 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:13:58.680922 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:13:58.680933 | orchestrator | 2025-10-08 16:13:58.680944 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-08 16:13:58.680981 | orchestrator | Wednesday 08 October 2025 16:13:47 +0000 (0:00:00.488) 0:00:04.570 ***** 2025-10-08 16:13:58.680993 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681004 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:13:58.681015 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:13:58.681026 | orchestrator | 2025-10-08 16:13:58.681037 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] 
******************** 2025-10-08 16:13:58.681049 | orchestrator | Wednesday 08 October 2025 16:13:47 +0000 (0:00:00.313) 0:00:04.884 ***** 2025-10-08 16:13:58.681060 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681071 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:13:58.681082 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:13:58.681093 | orchestrator | 2025-10-08 16:13:58.681104 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-10-08 16:13:58.681115 | orchestrator | Wednesday 08 October 2025 16:13:47 +0000 (0:00:00.312) 0:00:05.196 ***** 2025-10-08 16:13:58.681126 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681137 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:13:58.681147 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:13:58.681158 | orchestrator | 2025-10-08 16:13:58.681169 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-08 16:13:58.681180 | orchestrator | Wednesday 08 October 2025 16:13:48 +0000 (0:00:00.504) 0:00:05.700 ***** 2025-10-08 16:13:58.681191 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681202 | orchestrator | 2025-10-08 16:13:58.681213 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-08 16:13:58.681224 | orchestrator | Wednesday 08 October 2025 16:13:48 +0000 (0:00:00.251) 0:00:05.952 ***** 2025-10-08 16:13:58.681241 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681252 | orchestrator | 2025-10-08 16:13:58.681263 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-08 16:13:58.681274 | orchestrator | Wednesday 08 October 2025 16:13:48 +0000 (0:00:00.265) 0:00:06.218 ***** 2025-10-08 16:13:58.681285 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681296 | orchestrator | 2025-10-08 16:13:58.681306 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-10-08 16:13:58.681317 | orchestrator | Wednesday 08 October 2025 16:13:48 +0000 (0:00:00.256) 0:00:06.475 ***** 2025-10-08 16:13:58.681328 | orchestrator | 2025-10-08 16:13:58.681339 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:13:58.681350 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.087) 0:00:06.563 ***** 2025-10-08 16:13:58.681361 | orchestrator | 2025-10-08 16:13:58.681372 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:13:58.681383 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.086) 0:00:06.649 ***** 2025-10-08 16:13:58.681405 | orchestrator | 2025-10-08 16:13:58.681417 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-08 16:13:58.681428 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.078) 0:00:06.727 ***** 2025-10-08 16:13:58.681439 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681450 | orchestrator | 2025-10-08 16:13:58.681461 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-10-08 16:13:58.681472 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.245) 0:00:06.972 ***** 2025-10-08 16:13:58.681484 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681495 | orchestrator | 2025-10-08 16:13:58.681535 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-10-08 16:13:58.681547 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.272) 0:00:07.245 ***** 2025-10-08 16:13:58.681558 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681569 | orchestrator | 2025-10-08 16:13:58.681580 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 
2025-10-08 16:13:58.681591 | orchestrator | Wednesday 08 October 2025 16:13:49 +0000 (0:00:00.121) 0:00:07.367 ***** 2025-10-08 16:13:58.681602 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:13:58.681613 | orchestrator | 2025-10-08 16:13:58.681623 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-10-08 16:13:58.681634 | orchestrator | Wednesday 08 October 2025 16:13:51 +0000 (0:00:01.626) 0:00:08.994 ***** 2025-10-08 16:13:58.681645 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681656 | orchestrator | 2025-10-08 16:13:58.681667 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-10-08 16:13:58.681678 | orchestrator | Wednesday 08 October 2025 16:13:52 +0000 (0:00:00.496) 0:00:09.491 ***** 2025-10-08 16:13:58.681689 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681699 | orchestrator | 2025-10-08 16:13:58.681710 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-10-08 16:13:58.681721 | orchestrator | Wednesday 08 October 2025 16:13:52 +0000 (0:00:00.116) 0:00:09.607 ***** 2025-10-08 16:13:58.681732 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681742 | orchestrator | 2025-10-08 16:13:58.681753 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-10-08 16:13:58.681764 | orchestrator | Wednesday 08 October 2025 16:13:52 +0000 (0:00:00.333) 0:00:09.940 ***** 2025-10-08 16:13:58.681775 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681785 | orchestrator | 2025-10-08 16:13:58.681796 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-10-08 16:13:58.681808 | orchestrator | Wednesday 08 October 2025 16:13:52 +0000 (0:00:00.289) 0:00:10.230 ***** 2025-10-08 16:13:58.681818 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.681829 | orchestrator | 
2025-10-08 16:13:58.681840 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-10-08 16:13:58.681851 | orchestrator | Wednesday 08 October 2025 16:13:52 +0000 (0:00:00.104) 0:00:10.335 ***** 2025-10-08 16:13:58.681862 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681872 | orchestrator | 2025-10-08 16:13:58.681883 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-10-08 16:13:58.681894 | orchestrator | Wednesday 08 October 2025 16:13:53 +0000 (0:00:00.166) 0:00:10.501 ***** 2025-10-08 16:13:58.681905 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.681916 | orchestrator | 2025-10-08 16:13:58.681926 | orchestrator | TASK [Gather status data] ****************************************************** 2025-10-08 16:13:58.681937 | orchestrator | Wednesday 08 October 2025 16:13:53 +0000 (0:00:00.126) 0:00:10.627 ***** 2025-10-08 16:13:58.681948 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:13:58.681977 | orchestrator | 2025-10-08 16:13:58.681989 | orchestrator | TASK [Set health test data] **************************************************** 2025-10-08 16:13:58.682000 | orchestrator | Wednesday 08 October 2025 16:13:54 +0000 (0:00:01.326) 0:00:11.954 ***** 2025-10-08 16:13:58.682065 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.682079 | orchestrator | 2025-10-08 16:13:58.682090 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-10-08 16:13:58.682101 | orchestrator | Wednesday 08 October 2025 16:13:54 +0000 (0:00:00.298) 0:00:12.252 ***** 2025-10-08 16:13:58.682112 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.682123 | orchestrator | 2025-10-08 16:13:58.682134 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-10-08 16:13:58.682145 | orchestrator | Wednesday 08 October 2025 16:13:54 +0000 (0:00:00.136) 
0:00:12.389 ***** 2025-10-08 16:13:58.682156 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:13:58.682167 | orchestrator | 2025-10-08 16:13:58.682178 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-10-08 16:13:58.682188 | orchestrator | Wednesday 08 October 2025 16:13:55 +0000 (0:00:00.143) 0:00:12.532 ***** 2025-10-08 16:13:58.682199 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.682210 | orchestrator | 2025-10-08 16:13:58.682221 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-10-08 16:13:58.682232 | orchestrator | Wednesday 08 October 2025 16:13:55 +0000 (0:00:00.148) 0:00:12.680 ***** 2025-10-08 16:13:58.682243 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.682253 | orchestrator | 2025-10-08 16:13:58.682264 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-08 16:13:58.682276 | orchestrator | Wednesday 08 October 2025 16:13:55 +0000 (0:00:00.328) 0:00:13.009 ***** 2025-10-08 16:13:58.682287 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.682298 | orchestrator | 2025-10-08 16:13:58.682309 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-08 16:13:58.682320 | orchestrator | Wednesday 08 October 2025 16:13:55 +0000 (0:00:00.262) 0:00:13.272 ***** 2025-10-08 16:13:58.682330 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:13:58.682341 | orchestrator | 2025-10-08 16:13:58.682352 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-08 16:13:58.682363 | orchestrator | Wednesday 08 October 2025 16:13:56 +0000 (0:00:00.272) 0:00:13.544 ***** 2025-10-08 16:13:58.682374 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.682385 | orchestrator | 2025-10-08 16:13:58.682396 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-08 16:13:58.682407 | orchestrator | Wednesday 08 October 2025 16:13:57 +0000 (0:00:01.701) 0:00:15.246 ***** 2025-10-08 16:13:58.682417 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.682428 | orchestrator | 2025-10-08 16:13:58.682439 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-08 16:13:58.682450 | orchestrator | Wednesday 08 October 2025 16:13:58 +0000 (0:00:00.282) 0:00:15.528 ***** 2025-10-08 16:13:58.682461 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:13:58.682472 | orchestrator | 2025-10-08 16:13:58.682491 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:01.268477 | orchestrator | Wednesday 08 October 2025 16:13:58 +0000 (0:00:00.259) 0:00:15.788 ***** 2025-10-08 16:14:01.268579 | orchestrator | 2025-10-08 16:14:01.268597 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:01.268609 | orchestrator | Wednesday 08 October 2025 16:13:58 +0000 (0:00:00.069) 0:00:15.857 ***** 2025-10-08 16:14:01.268621 | orchestrator | 2025-10-08 16:14:01.268632 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:01.268643 | orchestrator | Wednesday 08 October 2025 16:13:58 +0000 (0:00:00.070) 0:00:15.928 ***** 2025-10-08 16:14:01.268653 | orchestrator | 2025-10-08 16:14:01.268664 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-08 16:14:01.268675 | orchestrator | Wednesday 08 October 2025 16:13:58 +0000 (0:00:00.076) 0:00:16.005 ***** 2025-10-08 16:14:01.268687 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:01.268725 | orchestrator | 
2025-10-08 16:14:01.268736 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-08 16:14:01.268747 | orchestrator | Wednesday 08 October 2025 16:14:00 +0000 (0:00:01.507) 0:00:17.512 ***** 2025-10-08 16:14:01.268758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-10-08 16:14:01.268769 | orchestrator |  "msg": [ 2025-10-08 16:14:01.268782 | orchestrator |  "Validator run completed.", 2025-10-08 16:14:01.268793 | orchestrator |  "You can find the report file here:", 2025-10-08 16:14:01.268805 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-10-08T16:13:43+00:00-report.json", 2025-10-08 16:14:01.268816 | orchestrator |  "on the following host:", 2025-10-08 16:14:01.268828 | orchestrator |  "testbed-manager" 2025-10-08 16:14:01.268839 | orchestrator |  ] 2025-10-08 16:14:01.268850 | orchestrator | } 2025-10-08 16:14:01.268861 | orchestrator | 2025-10-08 16:14:01.268872 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:14:01.268902 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-08 16:14:01.268915 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 16:14:01.268926 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 16:14:01.268937 | orchestrator | 2025-10-08 16:14:01.268949 | orchestrator | 2025-10-08 16:14:01.269011 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:14:01.269024 | orchestrator | Wednesday 08 October 2025 16:14:00 +0000 (0:00:00.887) 0:00:18.400 ***** 2025-10-08 16:14:01.269037 | orchestrator | =============================================================================== 2025-10-08 16:14:01.269049 | orchestrator | Aggregate test results 
step one ----------------------------------------- 1.70s 2025-10-08 16:14:01.269062 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.63s 2025-10-08 16:14:01.269074 | orchestrator | Write report file ------------------------------------------------------- 1.51s 2025-10-08 16:14:01.269086 | orchestrator | Gather status data ------------------------------------------------------ 1.33s 2025-10-08 16:14:01.269099 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-10-08 16:14:01.269111 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2025-10-08 16:14:01.269123 | orchestrator | Print report file information ------------------------------------------- 0.89s 2025-10-08 16:14:01.269136 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s 2025-10-08 16:14:01.269149 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.50s 2025-10-08 16:14:01.269161 | orchestrator | Set quorum test data ---------------------------------------------------- 0.50s 2025-10-08 16:14:01.269174 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2025-10-08 16:14:01.269191 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-10-08 16:14:01.269205 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-10-08 16:14:01.269217 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-10-08 16:14:01.269230 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s 2025-10-08 16:14:01.269242 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-10-08 16:14:01.269255 | orchestrator | Set health test data 
---------------------------------------------------- 0.30s 2025-10-08 16:14:01.269268 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-10-08 16:14:01.269280 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2025-10-08 16:14:01.269300 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-10-08 16:14:01.608365 | orchestrator | + osism validate ceph-mgrs 2025-10-08 16:14:32.952591 | orchestrator | 2025-10-08 16:14:32.952739 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-10-08 16:14:32.952766 | orchestrator | 2025-10-08 16:14:32.952786 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-08 16:14:32.952805 | orchestrator | Wednesday 08 October 2025 16:14:18 +0000 (0:00:00.435) 0:00:00.435 ***** 2025-10-08 16:14:32.952823 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.952841 | orchestrator | 2025-10-08 16:14:32.952859 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-08 16:14:32.952878 | orchestrator | Wednesday 08 October 2025 16:14:19 +0000 (0:00:00.850) 0:00:01.286 ***** 2025-10-08 16:14:32.952896 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.952985 | orchestrator | 2025-10-08 16:14:32.953006 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-08 16:14:32.953026 | orchestrator | Wednesday 08 October 2025 16:14:20 +0000 (0:00:01.006) 0:00:02.292 ***** 2025-10-08 16:14:32.953045 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953064 | orchestrator | 2025-10-08 16:14:32.953082 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-10-08 16:14:32.953100 | 
orchestrator | Wednesday 08 October 2025 16:14:20 +0000 (0:00:00.133) 0:00:02.426 ***** 2025-10-08 16:14:32.953117 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953134 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:14:32.953152 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:14:32.953169 | orchestrator | 2025-10-08 16:14:32.953187 | orchestrator | TASK [Get container info] ****************************************************** 2025-10-08 16:14:32.953205 | orchestrator | Wednesday 08 October 2025 16:14:20 +0000 (0:00:00.322) 0:00:02.748 ***** 2025-10-08 16:14:32.953224 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:14:32.953242 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:14:32.953263 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953282 | orchestrator | 2025-10-08 16:14:32.953301 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-10-08 16:14:32.953321 | orchestrator | Wednesday 08 October 2025 16:14:21 +0000 (0:00:01.023) 0:00:03.772 ***** 2025-10-08 16:14:32.953340 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.953359 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:14:32.953375 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:14:32.953394 | orchestrator | 2025-10-08 16:14:32.953413 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-10-08 16:14:32.953432 | orchestrator | Wednesday 08 October 2025 16:14:21 +0000 (0:00:00.311) 0:00:04.083 ***** 2025-10-08 16:14:32.953450 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953467 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:14:32.953486 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:14:32.953503 | orchestrator | 2025-10-08 16:14:32.953521 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-08 16:14:32.953539 | orchestrator | Wednesday 08 October 2025 16:14:22 +0000 
(0:00:00.503) 0:00:04.586 ***** 2025-10-08 16:14:32.953558 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953577 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:14:32.953595 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:14:32.953613 | orchestrator | 2025-10-08 16:14:32.953631 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-10-08 16:14:32.953650 | orchestrator | Wednesday 08 October 2025 16:14:22 +0000 (0:00:00.318) 0:00:04.905 ***** 2025-10-08 16:14:32.953669 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.953686 | orchestrator | skipping: [testbed-node-1] 2025-10-08 16:14:32.953705 | orchestrator | skipping: [testbed-node-2] 2025-10-08 16:14:32.953724 | orchestrator | 2025-10-08 16:14:32.953742 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-10-08 16:14:32.953804 | orchestrator | Wednesday 08 October 2025 16:14:23 +0000 (0:00:00.334) 0:00:05.239 ***** 2025-10-08 16:14:32.953826 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.953843 | orchestrator | ok: [testbed-node-1] 2025-10-08 16:14:32.953862 | orchestrator | ok: [testbed-node-2] 2025-10-08 16:14:32.953885 | orchestrator | 2025-10-08 16:14:32.953952 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-08 16:14:32.953973 | orchestrator | Wednesday 08 October 2025 16:14:23 +0000 (0:00:00.479) 0:00:05.719 ***** 2025-10-08 16:14:32.953990 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.954008 | orchestrator | 2025-10-08 16:14:32.954108 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-08 16:14:32.954130 | orchestrator | Wednesday 08 October 2025 16:14:23 +0000 (0:00:00.270) 0:00:05.990 ***** 2025-10-08 16:14:32.954150 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.954213 | orchestrator | 2025-10-08 16:14:32.954233 | 
orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-08 16:14:32.954250 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.294) 0:00:06.284 ***** 2025-10-08 16:14:32.954268 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.954286 | orchestrator | 2025-10-08 16:14:32.954306 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:32.954325 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.282) 0:00:06.567 ***** 2025-10-08 16:14:32.954343 | orchestrator | 2025-10-08 16:14:32.954383 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:32.954403 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.069) 0:00:06.636 ***** 2025-10-08 16:14:32.954420 | orchestrator | 2025-10-08 16:14:32.954438 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:32.954455 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.069) 0:00:06.705 ***** 2025-10-08 16:14:32.954474 | orchestrator | 2025-10-08 16:14:32.954492 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-08 16:14:32.954511 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.083) 0:00:06.789 ***** 2025-10-08 16:14:32.954529 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.954547 | orchestrator | 2025-10-08 16:14:32.954565 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-10-08 16:14:32.954583 | orchestrator | Wednesday 08 October 2025 16:14:24 +0000 (0:00:00.244) 0:00:07.033 ***** 2025-10-08 16:14:32.954601 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.954619 | orchestrator | 2025-10-08 16:14:32.954665 | orchestrator | TASK [Define mgr module test vars] 
********************************************* 2025-10-08 16:14:32.954684 | orchestrator | Wednesday 08 October 2025 16:14:25 +0000 (0:00:00.243) 0:00:07.276 ***** 2025-10-08 16:14:32.954702 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.954720 | orchestrator | 2025-10-08 16:14:32.954738 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-10-08 16:14:32.954756 | orchestrator | Wednesday 08 October 2025 16:14:25 +0000 (0:00:00.121) 0:00:07.398 ***** 2025-10-08 16:14:32.954774 | orchestrator | changed: [testbed-node-0] 2025-10-08 16:14:32.954792 | orchestrator | 2025-10-08 16:14:32.954810 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-10-08 16:14:32.954828 | orchestrator | Wednesday 08 October 2025 16:14:27 +0000 (0:00:02.141) 0:00:09.539 ***** 2025-10-08 16:14:32.954846 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.954864 | orchestrator | 2025-10-08 16:14:32.954882 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-10-08 16:14:32.954900 | orchestrator | Wednesday 08 October 2025 16:14:27 +0000 (0:00:00.431) 0:00:09.970 ***** 2025-10-08 16:14:32.954948 | orchestrator | ok: [testbed-node-0] 2025-10-08 16:14:32.954967 | orchestrator | 2025-10-08 16:14:32.954984 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-10-08 16:14:32.955003 | orchestrator | Wednesday 08 October 2025 16:14:28 +0000 (0:00:00.317) 0:00:10.288 ***** 2025-10-08 16:14:32.955039 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.955056 | orchestrator | 2025-10-08 16:14:32.955072 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-10-08 16:14:32.955088 | orchestrator | Wednesday 08 October 2025 16:14:28 +0000 (0:00:00.132) 0:00:10.420 ***** 2025-10-08 16:14:32.955104 | orchestrator | ok: [testbed-node-0] 
2025-10-08 16:14:32.955121 | orchestrator | 2025-10-08 16:14:32.955137 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-08 16:14:32.955153 | orchestrator | Wednesday 08 October 2025 16:14:28 +0000 (0:00:00.167) 0:00:10.587 ***** 2025-10-08 16:14:32.955169 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.955186 | orchestrator | 2025-10-08 16:14:32.955202 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-08 16:14:32.955218 | orchestrator | Wednesday 08 October 2025 16:14:28 +0000 (0:00:00.281) 0:00:10.868 ***** 2025-10-08 16:14:32.955233 | orchestrator | skipping: [testbed-node-0] 2025-10-08 16:14:32.955250 | orchestrator | 2025-10-08 16:14:32.955265 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-08 16:14:32.955280 | orchestrator | Wednesday 08 October 2025 16:14:28 +0000 (0:00:00.271) 0:00:11.139 ***** 2025-10-08 16:14:32.955296 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.955312 | orchestrator | 2025-10-08 16:14:32.955328 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-08 16:14:32.955344 | orchestrator | Wednesday 08 October 2025 16:14:30 +0000 (0:00:01.278) 0:00:12.418 ***** 2025-10-08 16:14:32.955360 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.955377 | orchestrator | 2025-10-08 16:14:32.955393 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-08 16:14:32.955409 | orchestrator | Wednesday 08 October 2025 16:14:30 +0000 (0:00:00.261) 0:00:12.680 ***** 2025-10-08 16:14:32.955426 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.955442 | orchestrator | 2025-10-08 16:14:32.955458 | orchestrator | TASK [Flush 
handlers] ********************************************************** 2025-10-08 16:14:32.955474 | orchestrator | Wednesday 08 October 2025 16:14:30 +0000 (0:00:00.261) 0:00:12.942 ***** 2025-10-08 16:14:32.955490 | orchestrator | 2025-10-08 16:14:32.955507 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:32.955523 | orchestrator | Wednesday 08 October 2025 16:14:30 +0000 (0:00:00.069) 0:00:13.011 ***** 2025-10-08 16:14:32.955539 | orchestrator | 2025-10-08 16:14:32.955555 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-08 16:14:32.955570 | orchestrator | Wednesday 08 October 2025 16:14:30 +0000 (0:00:00.068) 0:00:13.080 ***** 2025-10-08 16:14:32.955586 | orchestrator | 2025-10-08 16:14:32.955601 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-08 16:14:32.955617 | orchestrator | Wednesday 08 October 2025 16:14:31 +0000 (0:00:00.270) 0:00:13.351 ***** 2025-10-08 16:14:32.955633 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:32.955648 | orchestrator | 2025-10-08 16:14:32.955664 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-08 16:14:32.955679 | orchestrator | Wednesday 08 October 2025 16:14:32 +0000 (0:00:01.355) 0:00:14.706 ***** 2025-10-08 16:14:32.955695 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-10-08 16:14:32.955711 | orchestrator |  "msg": [ 2025-10-08 16:14:32.955727 | orchestrator |  "Validator run completed.", 2025-10-08 16:14:32.955743 | orchestrator |  "You can find the report file here:", 2025-10-08 16:14:32.955759 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-10-08T16:14:18+00:00-report.json", 2025-10-08 16:14:32.955777 | orchestrator |  "on the following host:", 2025-10-08 16:14:32.955792 | orchestrator |  
"testbed-manager" 2025-10-08 16:14:32.955809 | orchestrator |  ] 2025-10-08 16:14:32.955826 | orchestrator | } 2025-10-08 16:14:32.955853 | orchestrator | 2025-10-08 16:14:32.955870 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:14:32.955888 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-08 16:14:32.955906 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 16:14:32.955987 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-08 16:14:33.308044 | orchestrator | 2025-10-08 16:14:33.308139 | orchestrator | 2025-10-08 16:14:33.308154 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:14:33.308168 | orchestrator | Wednesday 08 October 2025 16:14:32 +0000 (0:00:00.416) 0:00:15.123 ***** 2025-10-08 16:14:33.308179 | orchestrator | =============================================================================== 2025-10-08 16:14:33.308209 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.14s 2025-10-08 16:14:33.308221 | orchestrator | Write report file ------------------------------------------------------- 1.36s 2025-10-08 16:14:33.308232 | orchestrator | Aggregate test results step one ----------------------------------------- 1.28s 2025-10-08 16:14:33.308243 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2025-10-08 16:14:33.308254 | orchestrator | Create report output directory ------------------------------------------ 1.01s 2025-10-08 16:14:33.308265 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2025-10-08 16:14:33.308276 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 
2025-10-08 16:14:33.308287 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.48s 2025-10-08 16:14:33.308298 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.43s 2025-10-08 16:14:33.308308 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-10-08 16:14:33.308319 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2025-10-08 16:14:33.308330 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s 2025-10-08 16:14:33.308341 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-10-08 16:14:33.308352 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-10-08 16:14:33.308362 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2025-10-08 16:14:33.308373 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-10-08 16:14:33.308384 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2025-10-08 16:14:33.308395 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2025-10-08 16:14:33.308406 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2025-10-08 16:14:33.308416 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.27s 2025-10-08 16:14:33.656001 | orchestrator | + osism validate ceph-osds 2025-10-08 16:14:54.897290 | orchestrator | 2025-10-08 16:14:54.897398 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-10-08 16:14:54.897416 | orchestrator | 2025-10-08 16:14:54.897428 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2025-10-08 16:14:54.897440 | orchestrator | Wednesday 08 October 2025 16:14:50 +0000 (0:00:00.416) 0:00:00.416 ***** 2025-10-08 16:14:54.897452 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:54.897463 | orchestrator | 2025-10-08 16:14:54.897474 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-08 16:14:54.897486 | orchestrator | Wednesday 08 October 2025 16:14:51 +0000 (0:00:00.695) 0:00:01.112 ***** 2025-10-08 16:14:54.897497 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:54.897535 | orchestrator | 2025-10-08 16:14:54.897547 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-08 16:14:54.897557 | orchestrator | Wednesday 08 October 2025 16:14:51 +0000 (0:00:00.443) 0:00:01.555 ***** 2025-10-08 16:14:54.897568 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-08 16:14:54.897579 | orchestrator | 2025-10-08 16:14:54.897590 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-08 16:14:54.897601 | orchestrator | Wednesday 08 October 2025 16:14:52 +0000 (0:00:00.955) 0:00:02.511 ***** 2025-10-08 16:14:54.897612 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:14:54.897625 | orchestrator | 2025-10-08 16:14:54.897636 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-08 16:14:54.897647 | orchestrator | Wednesday 08 October 2025 16:14:52 +0000 (0:00:00.153) 0:00:02.665 ***** 2025-10-08 16:14:54.897658 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:14:54.897669 | orchestrator | 2025-10-08 16:14:54.897680 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-08 16:14:54.897691 | orchestrator | Wednesday 08 October 2025 16:14:52 +0000 
(0:00:00.135) 0:00:02.800 ***** 2025-10-08 16:14:54.897702 | orchestrator | skipping: [testbed-node-3] 2025-10-08 16:14:54.897713 | orchestrator | skipping: [testbed-node-4] 2025-10-08 16:14:54.897724 | orchestrator | skipping: [testbed-node-5] 2025-10-08 16:14:54.897735 | orchestrator | 2025-10-08 16:14:54.897760 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-08 16:14:54.897772 | orchestrator | Wednesday 08 October 2025 16:14:53 +0000 (0:00:00.305) 0:00:03.106 ***** 2025-10-08 16:14:54.897783 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:14:54.897794 | orchestrator | 2025-10-08 16:14:54.897805 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-08 16:14:54.897818 | orchestrator | Wednesday 08 October 2025 16:14:53 +0000 (0:00:00.149) 0:00:03.256 ***** 2025-10-08 16:14:54.897830 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:14:54.897843 | orchestrator | ok: [testbed-node-4] 2025-10-08 16:14:54.897855 | orchestrator | ok: [testbed-node-5] 2025-10-08 16:14:54.897868 | orchestrator | 2025-10-08 16:14:54.897879 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-10-08 16:14:54.897917 | orchestrator | Wednesday 08 October 2025 16:14:53 +0000 (0:00:00.306) 0:00:03.562 ***** 2025-10-08 16:14:54.897930 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:14:54.897942 | orchestrator | 2025-10-08 16:14:54.897954 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-08 16:14:54.897967 | orchestrator | Wednesday 08 October 2025 16:14:54 +0000 (0:00:00.563) 0:00:04.125 ***** 2025-10-08 16:14:54.897979 | orchestrator | ok: [testbed-node-3] 2025-10-08 16:14:54.897991 | orchestrator | ok: [testbed-node-4] 2025-10-08 16:14:54.898003 | orchestrator | ok: [testbed-node-5] 2025-10-08 16:14:54.898070 | orchestrator | 2025-10-08 16:14:54.898084 | orchestrator | 
TASK [Get list of ceph-osd containers on host] ********************************* 2025-10-08 16:14:54.898097 | orchestrator | Wednesday 08 October 2025 16:14:54 +0000 (0:00:00.533) 0:00:04.659 ***** 2025-10-08 16:14:54.898112 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b257cfaec22298371f4f26ea7f55cce566a6d9b8afc8b72fc3693b6a904bf711', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-10-08 16:14:54.898158 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69f2833656f1b3cf25745c5063f64187d25d599683aec1209099ad5c5aa42c8e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-10-08 16:14:54.898172 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d2bd8d781a5ebac5665df578fdf915d2d3b2c280d3ab8902de5fe0dad0793a5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-10-08 16:14:54.898197 | orchestrator | skipping: [testbed-node-3] => (item={'id': '93f3baed3d338c3662c4a768d12f7c2b36079c623bd12028aace8458b5d22d60', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-10-08 16:14:54.898216 | orchestrator | skipping: [testbed-node-3] => (item={'id': '09362e40c9804360548e62b5df825728dbe98dd5fbc1e9b2f8d6a50c49b1b176', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-10-08 16:14:54.898247 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b01bb2f6d5b57036debae4da013aa4dc56e9fc3dcab1c6e74a776fc94134b08d', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  
2025-10-08 16:14:54.898260 | orchestrator | skipping: [testbed-node-3] => (item={'id': '910a078121364d5ad395334e828bfcaf9635377944a4fb689ebeb48cec1bc618', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-08 16:14:54.898272 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8340fcd17333e16778339825deb8d36928ca21336c643408f991b6c5c1ffec8b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2025-10-08 16:14:54.898283 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bb46d883ec54aae2f24e348ee3e87a417a78c05384192bbaa6885f87ccf77d16', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-10-08 16:14:54.898299 | orchestrator | skipping: [testbed-node-3] => (item={'id': '855554a60c8c16c662f4044b6768204f92d5a20db31446551b6acbd69e3a31b6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2025-10-08 16:14:54.898311 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6145ba44bcd924bf4d6e0cdae797ff6b95a1bbcc87ff0d357929aa8759d907c7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-08 16:14:54.898328 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5909ce0bc4fa577539dc273013b15042815af698d178edbf4771f136878cab85', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2025-10-08 16:14:54.898340 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f35061cb0c2dfda6d6cee68887c6a4d4459fe3d784edf8db631b8ef5ca2b2b3d', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-10-08 16:14:54.898352 | orchestrator | ok: [testbed-node-3] => (item={'id': '6b5909bf17095af799e372839ac6037b1468be125c4c9b4c266f4af7febe4ffa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-10-08 16:14:54.898364 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3cc35d475bc737eef98e63d5dbd501af3cf3af3527022d79cd067ad96920a8b9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-10-08 16:14:54.898375 | orchestrator | skipping: [testbed-node-3] => (item={'id': '969c5f57badcf68f6479d003240c20b4fbc4ddd40af9bfa6efe453fc18ad5ea0', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-10-08 16:14:54.898387 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f385e7fc283596a6169830f2612a9db38d5cf093e73949bc0a75a6b171519b0b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-10-08 16:14:54.898406 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84781c9e894acf742fb1200044160081810ccf08c54b0abdf7acdd999f8183f6', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-08 16:14:54.898417 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3647d1079aad36660a697bb9cc7ae5b97365f3cf5377f49ed9cb1765f25dd2a2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-08 16:14:54.898429 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'2cef6b0a6a943734bd7b02f8bb77b74b60348d2813130a08363307854c673a71', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-08 16:14:54.898441 | orchestrator | skipping: [testbed-node-4] => (item={'id': '288f6372d7e028268075fabf5301154b3d08e512a5b1f0621d5763758a9df86f', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-10-08 16:14:54.898461 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f97cde2d500714c4c4315198e8cbdbed9962eea0cfe5b1d821c75d43a540b28c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-10-08 16:14:55.205742 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e22970f69b7db43eeb4c846ff5d6c890458d6b87651fca016c409847fd205c3', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-10-08 16:14:55.205837 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ad9f8e6b164f37c0436d6ea1f5176f1dd43ab826e43f238fd0354d9d1c8c6a9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-10-08 16:14:55.205853 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79e45d59e279fd8fcbb78721228dca7a2983a743979cc65e6a5b93e431fdf0da', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-10-08 16:14:55.205866 | orchestrator | skipping: [testbed-node-4] => (item={'id': '05df9fe7492eaf05ab388623096d6ea2c74b29632aa63cc666eacbbdc271573d', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  
2025-10-08 16:14:55.205877 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dcea27690aa6e4f08359a4306f62be8d77e2ad66cc7d6d635656a2cb17d51691', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-08 16:14:55.205940 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2981eb64cb8bbb6cfae25612cb9b8ff95ab0d6a15c91b24eff931a58f6adf545', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})
2025-10-08 16:14:55.205951 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd6a2fb34ce764a721b4878011495b064d7a80855e25f63b86bec77149b14046', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2025-10-08 16:14:55.205964 | orchestrator | skipping: [testbed-node-4] => (item={'id': '037ee176fddd436c0e3c3a2a42266944999ab9846dd7854b9a1e91c37537103a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-10-08 16:14:55.205994 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c27ce468ec5d55471c64ab75228439d83359a01992c94f10097cbe49b635500', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-10-08 16:14:55.206077 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a38a215c77aab2dfcf0e958a1e70102911c9138bd62e6ab4604911ef41913779', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-08 16:14:55.206092 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ae764f5864c102f278f353aa5942c94ea09172e88a2045681d04906b6db97c8d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-08 16:14:55.206104 | orchestrator | ok: [testbed-node-4] => (item={'id': '5206d113df4372e3c4f6f9738ef9afcc8e2b6e68fc776d311ef7d76e1c9fa785', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-08 16:14:55.206115 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fc664e0a8efc19378687e22e852ba39f34f549ce9fa95704cf605f99769bdac4', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})
2025-10-08 16:14:55.206127 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e505dadb755eb60cfcbacb5872138f59e17b31c1631fc15f7d78c48d99ff65e2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-10-08 16:14:55.206138 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd067ec3c0dc4add575e270c76870c2cd67e896ddcadd4c1749eeb1f4a3d2cce9', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-10-08 16:14:55.206168 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4b6079837290918af66d60f831f3e16ecb10638772fc958f4fbeeff4303f61a4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:14:55.206181 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67227ce4f1e92faf7add29bd35e105ca182da9507f6384f0ce5e2830cbc4aabc', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:14:55.206192 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9079d05c40dd49e59a528f62691334b0da2d58775e9413f74ae31cd3e7fa5b5e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:14:55.206204 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f6a600d150ca216628a455a6f34ed2d49802d624a8c1b911a17608bea22ca4b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-10-08 16:14:55.206216 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b8f75cd99f743112d26155ebbc6de8e4e5c4956c65384622834f58fdf5dac40e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-10-08 16:14:55.206233 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7989108e50666962ee0c96f180ff98d7f9d10aa6eb03e1e6442f87f70edb0248', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-10-08 16:14:55.206245 | orchestrator | skipping: [testbed-node-5] => (item={'id': '047ac2e202fe88817299bc8d101140f7a51cee6b125851dc1ef7c884f3007d5a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-10-08 16:14:55.206256 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bfcc475c261aedf9f4ae7524702c8965973860e0154035b975b80bbf76cd1016', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-10-08 16:14:55.206276 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4771047b28cd4d2ad4fdb64e0d690326824ef638d7e63231d8187d16efb28742', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-10-08 16:14:55.206290 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c27463a68c94316a8301d079c9df8279ed309a9a9ab35c26385574923e18cceb', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-08 16:14:55.206303 | orchestrator | skipping: [testbed-node-5] => (item={'id': '210d0ae9806edc2482ed18e5a07e67dd8d0ce20bafa7c90ad60cd916aad4da16', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})
2025-10-08 16:14:55.206316 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da3b9b7071b15eaabaf492049dfb7daaacd44c5ecbdcdf83c77c82114afaa9b9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2025-10-08 16:14:55.206328 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc695f44601b16c94224522c8958ca897fd7beb082c3e1a35488e77eccee0f20', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-10-08 16:14:55.206341 | orchestrator | skipping: [testbed-node-5] => (item={'id': '80e1cc36986fa4bcbb0e0c8e7ed7fde7eed4dd18decfca270fa0a8ba541399c5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-10-08 16:14:55.206354 | orchestrator | skipping: [testbed-node-5] => (item={'id': '88be2f1d7a1daa24db65af8b9238eb7f32849f7de779e8594d748f1b625acf17', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-08 16:14:55.206373 | orchestrator | ok: [testbed-node-5] => (item={'id': '973f1fc371e1f0767ccc9bc2adfbf63d2b3fc4ca0937b1f9dbc858a7d367c2eb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-08 16:15:03.458252 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd2dee7f270e38586340425a9d29e22a02a69ac22e9b1cf27bae3ef499afbf256', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-08 16:15:03.458366 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff3b31cc40fbaf64c6eb2ad4e6908b62b521b0fe750c901fa20f0f0063354e07', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})
2025-10-08 16:15:03.458384 | orchestrator | skipping: [testbed-node-5] => (item={'id': '874e6aa8aed7e2c05cfc3701ec0a12bc28c264c7b444feb845a1868e1c7a3035', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-10-08 16:15:03.458398 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'decd30627e711642369cbbac9a53f8997b825cab9e74507ae0277d638bc62c05', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-10-08 16:15:03.458410 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3643497a339edae55b6b66972d959c3888aa8879a733dcc181f907944331254e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:15:03.458439 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ea2dfac3107d1c83a9e8ca6751203e9c65b3117d17a94ba0bd6e47c6b7327a2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:15:03.458472 | orchestrator | skipping: [testbed-node-5] => (item={'id': '398ad2e8f57a11fd7240a1d5dc61392674b82fb171759144363b60e0e74dc8ba', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-08 16:15:03.458485 | orchestrator |
2025-10-08 16:15:03.458498 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-10-08 16:15:03.458511 | orchestrator | Wednesday 08 October 2025 16:14:55 +0000 (0:00:00.539) 0:00:05.198 *****
2025-10-08 16:15:03.458522 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.458533 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.458544 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.458555 | orchestrator |
2025-10-08 16:15:03.458567 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-10-08 16:15:03.458578 | orchestrator | Wednesday 08 October 2025 16:14:55 +0000 (0:00:00.329) 0:00:05.528 *****
2025-10-08 16:15:03.458589 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.458600 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:03.458611 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:03.458622 | orchestrator |
2025-10-08 16:15:03.458633 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-10-08 16:15:03.458644 | orchestrator | Wednesday 08 October 2025 16:14:55 +0000 (0:00:00.286) 0:00:05.814 *****
2025-10-08 16:15:03.458655 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.458666 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.458677 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.458687 | orchestrator |
2025-10-08 16:15:03.458698 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-08 16:15:03.458709 | orchestrator | Wednesday 08 October 2025 16:14:56 +0000 (0:00:00.288) 0:00:06.357 *****
2025-10-08 16:15:03.458720 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.458731 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.458741 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.458752 | orchestrator |
2025-10-08 16:15:03.458763 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-10-08 16:15:03.458774 | orchestrator | Wednesday 08 October 2025 16:14:56 +0000 (0:00:00.316) 0:00:06.646 *****
2025-10-08 16:15:03.458788 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-10-08 16:15:03.458802 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-10-08 16:15:03.458815 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.458828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-10-08 16:15:03.458840 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-10-08 16:15:03.458853 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:03.458866 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-10-08 16:15:03.458905 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-10-08 16:15:03.458918 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:03.458930 | orchestrator |
2025-10-08 16:15:03.458942 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-10-08 16:15:03.458953 | orchestrator | Wednesday 08 October 2025 16:14:56 +0000 (0:00:00.295) 0:00:06.962 *****
2025-10-08 16:15:03.458964 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.458974 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.458985 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.458996 | orchestrator |
2025-10-08 16:15:03.459025 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-10-08 16:15:03.459037 | orchestrator | Wednesday 08 October 2025 16:14:57 +0000 (0:00:00.295) 0:00:07.257 *****
2025-10-08 16:15:03.459048 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459059 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:03.459078 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:03.459089 | orchestrator |
2025-10-08 16:15:03.459100 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-10-08 16:15:03.459111 | orchestrator | Wednesday 08 October 2025 16:14:57 +0000 (0:00:00.472) 0:00:07.729 *****
2025-10-08 16:15:03.459122 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459133 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:03.459143 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:03.459154 | orchestrator |
2025-10-08 16:15:03.459165 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-10-08 16:15:03.459176 | orchestrator | Wednesday 08 October 2025 16:14:58 +0000 (0:00:00.301) 0:00:08.031 *****
2025-10-08 16:15:03.459187 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459198 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.459209 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.459220 | orchestrator |
2025-10-08 16:15:03.459231 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-10-08 16:15:03.459242 | orchestrator | Wednesday 08 October 2025 16:14:58 +0000 (0:00:00.309) 0:00:08.340 *****
2025-10-08 16:15:03.459253 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459263 | orchestrator |
2025-10-08 16:15:03.459275 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-10-08 16:15:03.459286 | orchestrator | Wednesday 08 October 2025 16:14:58 +0000 (0:00:00.253) 0:00:08.594 *****
2025-10-08 16:15:03.459297 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459308 | orchestrator |
2025-10-08 16:15:03.459319 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-10-08 16:15:03.459330 | orchestrator | Wednesday 08 October 2025 16:14:58 +0000 (0:00:00.247) 0:00:08.842 *****
2025-10-08 16:15:03.459341 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459352 | orchestrator |
2025-10-08 16:15:03.459364 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:03.459374 | orchestrator | Wednesday 08 October 2025 16:14:59 +0000 (0:00:00.262) 0:00:09.105 *****
2025-10-08 16:15:03.459385 | orchestrator |
2025-10-08 16:15:03.459396 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:03.459407 | orchestrator | Wednesday 08 October 2025 16:14:59 +0000 (0:00:00.081) 0:00:09.187 *****
2025-10-08 16:15:03.459418 | orchestrator |
2025-10-08 16:15:03.459429 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:03.459440 | orchestrator | Wednesday 08 October 2025 16:14:59 +0000 (0:00:00.281) 0:00:09.468 *****
2025-10-08 16:15:03.459451 | orchestrator |
2025-10-08 16:15:03.459462 | orchestrator | TASK [Print report file information] *******************************************
2025-10-08 16:15:03.459473 | orchestrator | Wednesday 08 October 2025 16:14:59 +0000 (0:00:00.072) 0:00:09.540 *****
2025-10-08 16:15:03.459484 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459495 | orchestrator |
2025-10-08 16:15:03.459506 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-10-08 16:15:03.459516 | orchestrator | Wednesday 08 October 2025 16:14:59 +0000 (0:00:00.271) 0:00:09.812 *****
2025-10-08 16:15:03.459527 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459538 | orchestrator |
2025-10-08 16:15:03.459549 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-08 16:15:03.459560 | orchestrator | Wednesday 08 October 2025 16:15:00 +0000 (0:00:00.259) 0:00:10.071 *****
2025-10-08 16:15:03.459571 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459582 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.459593 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.459603 | orchestrator |
2025-10-08 16:15:03.459614 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-10-08 16:15:03.459625 | orchestrator | Wednesday 08 October 2025 16:15:00 +0000 (0:00:00.317) 0:00:10.388 *****
2025-10-08 16:15:03.459636 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459647 | orchestrator |
2025-10-08 16:15:03.459668 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-10-08 16:15:03.459679 | orchestrator | Wednesday 08 October 2025 16:15:00 +0000 (0:00:00.226) 0:00:10.614 *****
2025-10-08 16:15:03.459690 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-10-08 16:15:03.459701 | orchestrator |
2025-10-08 16:15:03.459712 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-10-08 16:15:03.459723 | orchestrator | Wednesday 08 October 2025 16:15:02 +0000 (0:00:01.630) 0:00:12.245 *****
2025-10-08 16:15:03.459734 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459745 | orchestrator |
2025-10-08 16:15:03.459756 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-10-08 16:15:03.459767 | orchestrator | Wednesday 08 October 2025 16:15:02 +0000 (0:00:00.139) 0:00:12.385 *****
2025-10-08 16:15:03.459777 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459788 | orchestrator |
2025-10-08 16:15:03.459799 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-10-08 16:15:03.459810 | orchestrator | Wednesday 08 October 2025 16:15:02 +0000 (0:00:00.314) 0:00:12.699 *****
2025-10-08 16:15:03.459821 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:03.459832 | orchestrator |
2025-10-08 16:15:03.459843 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-10-08 16:15:03.459854 | orchestrator | Wednesday 08 October 2025 16:15:02 +0000 (0:00:00.121) 0:00:12.821 *****
2025-10-08 16:15:03.459865 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459907 | orchestrator |
2025-10-08 16:15:03.459956 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-08 16:15:03.459969 | orchestrator | Wednesday 08 October 2025 16:15:03 +0000 (0:00:00.340) 0:00:13.162 *****
2025-10-08 16:15:03.459980 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:03.459991 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:03.460001 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:03.460012 | orchestrator |
2025-10-08 16:15:03.460023 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-10-08 16:15:03.460041 | orchestrator | Wednesday 08 October 2025 16:15:03 +0000 (0:00:00.306) 0:00:13.469 *****
2025-10-08 16:15:16.343461 | orchestrator | changed: [testbed-node-3]
2025-10-08 16:15:16.343555 | orchestrator | changed: [testbed-node-5]
2025-10-08 16:15:16.343569 | orchestrator | changed: [testbed-node-4]
2025-10-08 16:15:16.343580 | orchestrator |
2025-10-08 16:15:16.343593 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-10-08 16:15:16.343605 | orchestrator | Wednesday 08 October 2025 16:15:05 +0000 (0:00:02.406) 0:00:15.875 *****
2025-10-08 16:15:16.343616 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.343628 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.343639 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.343650 | orchestrator |
2025-10-08 16:15:16.343661 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-10-08 16:15:16.343672 | orchestrator | Wednesday 08 October 2025 16:15:06 +0000 (0:00:00.330) 0:00:16.205 *****
2025-10-08 16:15:16.343683 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.343693 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.343704 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.343715 | orchestrator |
2025-10-08 16:15:16.343725 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-10-08 16:15:16.343736 | orchestrator | Wednesday 08 October 2025 16:15:06 +0000 (0:00:00.753) 0:00:16.958 *****
2025-10-08 16:15:16.343747 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:16.343758 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:16.343769 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:16.343781 | orchestrator |
2025-10-08 16:15:16.343791 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-10-08 16:15:16.343802 | orchestrator | Wednesday 08 October 2025 16:15:07 +0000 (0:00:00.366) 0:00:17.325 *****
2025-10-08 16:15:16.343813 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.343847 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.343911 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.343923 | orchestrator |
2025-10-08 16:15:16.343935 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-10-08 16:15:16.343961 | orchestrator | Wednesday 08 October 2025 16:15:07 +0000 (0:00:00.326) 0:00:17.651 *****
2025-10-08 16:15:16.343972 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:16.343983 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:16.343993 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:16.344005 | orchestrator |
2025-10-08 16:15:16.344018 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-10-08 16:15:16.344030 | orchestrator | Wednesday 08 October 2025 16:15:07 +0000 (0:00:00.322) 0:00:17.973 *****
2025-10-08 16:15:16.344042 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:16.344054 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:16.344067 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:16.344079 | orchestrator |
2025-10-08 16:15:16.344091 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-08 16:15:16.344104 | orchestrator | Wednesday 08 October 2025 16:15:08 +0000 (0:00:00.488) 0:00:18.462 *****
2025-10-08 16:15:16.344116 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.344128 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.344141 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.344152 | orchestrator |
2025-10-08 16:15:16.344165 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-10-08 16:15:16.344176 | orchestrator | Wednesday 08 October 2025 16:15:08 +0000 (0:00:00.501) 0:00:18.964 *****
2025-10-08 16:15:16.344189 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.344201 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.344213 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.344225 | orchestrator |
2025-10-08 16:15:16.344237 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-10-08 16:15:16.344249 | orchestrator | Wednesday 08 October 2025 16:15:09 +0000 (0:00:00.534) 0:00:19.499 *****
2025-10-08 16:15:16.344262 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.344273 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.344285 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.344298 | orchestrator |
2025-10-08 16:15:16.344310 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-10-08 16:15:16.344323 | orchestrator | Wednesday 08 October 2025 16:15:09 +0000 (0:00:00.333) 0:00:19.832 *****
2025-10-08 16:15:16.344335 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:16.344348 | orchestrator | skipping: [testbed-node-4]
2025-10-08 16:15:16.344360 | orchestrator | skipping: [testbed-node-5]
2025-10-08 16:15:16.344370 | orchestrator |
2025-10-08 16:15:16.344381 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-10-08 16:15:16.344392 | orchestrator | Wednesday 08 October 2025 16:15:10 +0000 (0:00:00.593) 0:00:20.426 *****
2025-10-08 16:15:16.344402 | orchestrator | ok: [testbed-node-3]
2025-10-08 16:15:16.344413 | orchestrator | ok: [testbed-node-4]
2025-10-08 16:15:16.344423 | orchestrator | ok: [testbed-node-5]
2025-10-08 16:15:16.344434 | orchestrator |
2025-10-08 16:15:16.344444 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-10-08 16:15:16.344455 | orchestrator | Wednesday 08 October 2025 16:15:10 +0000 (0:00:00.328) 0:00:20.754 *****
2025-10-08 16:15:16.344466 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 16:15:16.344477 | orchestrator |
2025-10-08 16:15:16.344487 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-10-08 16:15:16.344498 | orchestrator | Wednesday 08 October 2025 16:15:11 +0000 (0:00:00.285) 0:00:21.040 *****
2025-10-08 16:15:16.344509 | orchestrator | skipping: [testbed-node-3]
2025-10-08 16:15:16.344520 | orchestrator |
2025-10-08 16:15:16.344530 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-10-08 16:15:16.344541 | orchestrator | Wednesday 08 October 2025 16:15:11 +0000 (0:00:00.250) 0:00:21.290 *****
2025-10-08 16:15:16.344560 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 16:15:16.344571 | orchestrator |
2025-10-08 16:15:16.344582 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-10-08 16:15:16.344593 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:01.740) 0:00:23.030 *****
2025-10-08 16:15:16.344604 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 16:15:16.344614 | orchestrator |
2025-10-08 16:15:16.344625 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-10-08 16:15:16.344636 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:00.292) 0:00:23.323 *****
2025-10-08 16:15:16.344664 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 16:15:16.344676 | orchestrator |
2025-10-08 16:15:16.344687 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:16.344698 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:00.069) 0:00:23.583 *****
2025-10-08 16:15:16.344708 | orchestrator |
2025-10-08 16:15:16.344720 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:16.344739 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:00.067) 0:00:23.653 *****
2025-10-08 16:15:16.344757 | orchestrator |
2025-10-08 16:15:16.344775 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-08 16:15:16.344792 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:00.067) 0:00:23.720 *****
2025-10-08 16:15:16.344810 | orchestrator |
2025-10-08 16:15:16.344827 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-10-08 16:15:16.344844 | orchestrator | Wednesday 08 October 2025 16:15:13 +0000 (0:00:00.084) 0:00:23.805 *****
2025-10-08 16:15:16.344885 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-08 16:15:16.344903 | orchestrator |
2025-10-08 16:15:16.344920 | orchestrator | TASK [Print report file information] *******************************************
2025-10-08 16:15:16.344938 | orchestrator | Wednesday 08 October 2025 16:15:15 +0000 (0:00:01.755) 0:00:25.561 *****
2025-10-08 16:15:16.344957 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-10-08 16:15:16.344977 | orchestrator |  "msg": [
2025-10-08 16:15:16.344990 | orchestrator |  "Validator run completed.",
2025-10-08 16:15:16.345001 | orchestrator |  "You can find the report file here:",
2025-10-08 16:15:16.345012 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-10-08T16:14:50+00:00-report.json",
2025-10-08 16:15:16.345024 | orchestrator |  "on the following host:",
2025-10-08 16:15:16.345042 | orchestrator |  "testbed-manager"
2025-10-08 16:15:16.345054 | orchestrator |  ]
2025-10-08 16:15:16.345065 | orchestrator | }
2025-10-08 16:15:16.345076 | orchestrator |
2025-10-08 16:15:16.345086 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:15:16.345098 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-10-08 16:15:16.345110 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-08 16:15:16.345121 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-08 16:15:16.345132 | orchestrator |
2025-10-08 16:15:16.345143 | orchestrator |
2025-10-08 16:15:16.345154 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:15:16.345165 | orchestrator | Wednesday 08 October 2025 16:15:15 +0000 (0:00:00.414) 0:00:25.975 *****
2025-10-08 16:15:16.345175 | orchestrator | ===============================================================================
2025-10-08 16:15:16.345186 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.41s
2025-10-08 16:15:16.345196 | orchestrator | Write report file ------------------------------------------------------- 1.76s
2025-10-08 16:15:16.345216 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s
2025-10-08 16:15:16.345226 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.63s
2025-10-08 16:15:16.345237 | orchestrator | Create report output directory ------------------------------------------ 0.96s
2025-10-08 16:15:16.345247 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.75s
2025-10-08 16:15:16.345258 | orchestrator | Get timestamp for report file ------------------------------------------- 0.70s
2025-10-08 16:15:16.345269 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.59s
2025-10-08 16:15:16.345279 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s
2025-10-08 16:15:16.345290 | orchestrator | Set test result to passed if count matches ------------------------------ 0.54s
2025-10-08 16:15:16.345301 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s
2025-10-08 16:15:16.345311 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.53s
2025-10-08 16:15:16.345322 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s
2025-10-08 16:15:16.345332 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-10-08 16:15:16.345343 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.49s
2025-10-08 16:15:16.345353 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.47s
2025-10-08 16:15:16.345364 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.44s
2025-10-08 16:15:16.345374 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s
2025-10-08 16:15:16.345385 | orchestrator | Print report file information ------------------------------------------- 0.41s
2025-10-08 16:15:16.345395 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.37s
2025-10-08 16:15:16.675468 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-10-08 16:15:16.682356 | orchestrator | + set -e
2025-10-08 16:15:16.683035 | orchestrator | + source /opt/manager-vars.sh
2025-10-08 16:15:16.683073 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-08 16:15:16.683087 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-08 16:15:16.683099 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-08 16:15:16.683111 | orchestrator | ++ CEPH_VERSION=reef
2025-10-08 16:15:16.683124 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-08 16:15:16.683138 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-08 16:15:16.683150 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 16:15:16.683162 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 16:15:16.683174 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-08 16:15:16.683187 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-08 16:15:16.683199 | orchestrator | ++ export ARA=false
2025-10-08 16:15:16.683212 | orchestrator | ++ ARA=false
2025-10-08 16:15:16.683225 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-08 16:15:16.683236 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-08 16:15:16.683248 | orchestrator | ++ export TEMPEST=false
2025-10-08 16:15:16.683261 | orchestrator | ++ TEMPEST=false
2025-10-08 16:15:16.683273 | orchestrator | ++ export IS_ZUUL=true
2025-10-08 16:15:16.683283 | orchestrator | ++ IS_ZUUL=true
2025-10-08 16:15:16.683294 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 16:15:16.683305 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.175
2025-10-08 16:15:16.683316 | orchestrator | ++ export EXTERNAL_API=false
2025-10-08 16:15:16.683326 | orchestrator | ++ EXTERNAL_API=false
2025-10-08 16:15:16.683336 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-08 16:15:16.683347 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-08 16:15:16.683357 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-08 16:15:16.683368 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-08 16:15:16.683378 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-08 16:15:16.683389 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-08 16:15:16.683399 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-10-08 16:15:16.683410 | orchestrator | + source /etc/os-release
2025-10-08 16:15:16.683420 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-10-08 16:15:16.683431 | orchestrator | ++ NAME=Ubuntu
2025-10-08 16:15:16.683442 | orchestrator | ++ VERSION_ID=24.04
2025-10-08 16:15:16.683452 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-10-08 16:15:16.683498 | orchestrator | ++ VERSION_CODENAME=noble
2025-10-08 16:15:16.683518 | orchestrator | ++ ID=ubuntu
2025-10-08 16:15:16.683537 | orchestrator | ++ ID_LIKE=debian
2025-10-08 16:15:16.683555 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-10-08 16:15:16.683574 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-10-08 16:15:16.683594 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-10-08 16:15:16.683613 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-10-08 16:15:16.683629 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-10-08 16:15:16.683641 | orchestrator | ++ LOGO=ubuntu-logo
2025-10-08 16:15:16.683652 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-10-08 16:15:16.683663 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-10-08 16:15:16.683675 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-10-08 16:15:16.717947 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-10-08 16:15:41.793627 | orchestrator |
2025-10-08 16:15:41.793746 | orchestrator | # Status of Elasticsearch
2025-10-08 16:15:41.793764 | orchestrator |
2025-10-08 16:15:41.793777 | orchestrator | + pushd /opt/configuration/contrib
2025-10-08 16:15:41.793790 | orchestrator | + echo
2025-10-08 16:15:41.793802 | orchestrator | + echo '# Status of Elasticsearch'
2025-10-08 16:15:41.793813 | orchestrator | + echo
2025-10-08 16:15:41.793824 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-10-08 16:15:41.980697 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-10-08 16:15:41.981526 | orchestrator |
2025-10-08 16:15:41.981554 | orchestrator | + echo
2025-10-08 16:15:41.981568 | orchestrator | + echo '# Status of MariaDB'
2025-10-08 16:15:41.982585 | orchestrator | # Status of MariaDB
2025-10-08 16:15:41.982606 | orchestrator |
2025-10-08 16:15:41.982618 | orchestrator | + echo
2025-10-08 16:15:41.982629 | orchestrator | + MARIADB_USER=root_shard_0
2025-10-08 16:15:41.982641 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-10-08 16:15:42.044200 | orchestrator | Reading package lists...
2025-10-08 16:15:42.381882 | orchestrator | Building dependency tree...
2025-10-08 16:15:42.382301 | orchestrator | Reading state information...
2025-10-08 16:15:42.808128 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-10-08 16:15:42.808230 | orchestrator | bc set to manually installed.
2025-10-08 16:15:42.808246 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-10-08 16:15:43.399674 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-10-08 16:15:43.400400 | orchestrator |
2025-10-08 16:15:43.400434 | orchestrator | # Status of Prometheus
2025-10-08 16:15:43.400448 | orchestrator |
2025-10-08 16:15:43.400461 | orchestrator | + echo
2025-10-08 16:15:43.400474 | orchestrator | + echo '# Status of Prometheus'
2025-10-08 16:15:43.400488 | orchestrator | + echo
2025-10-08 16:15:43.400501 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-10-08 16:15:43.471877 | orchestrator | Unauthorized
2025-10-08 16:15:43.476509 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-10-08 16:15:43.546719 | orchestrator | Unauthorized
2025-10-08 16:15:43.550689 | orchestrator |
2025-10-08 16:15:43.550723 | orchestrator | # Status of RabbitMQ
2025-10-08 16:15:43.550737 | orchestrator |
2025-10-08 16:15:43.550749 | orchestrator | + echo
2025-10-08 16:15:43.550760 | orchestrator | + echo '# Status of RabbitMQ'
2025-10-08 16:15:43.550772 | orchestrator | + echo
2025-10-08 16:15:43.550784 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-10-08 16:15:44.071792 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-10-08 16:15:44.080897 | orchestrator |
2025-10-08 16:15:44.080947 | orchestrator | # Status of Redis
2025-10-08 16:15:44.080960 | orchestrator |
2025-10-08 16:15:44.080972 | orchestrator | + echo
2025-10-08 16:15:44.080984 | orchestrator | + echo '# Status of Redis'
2025-10-08 16:15:44.080996 | orchestrator | + echo
2025-10-08 16:15:44.081009 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-10-08 16:15:44.085418 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001480s;;;0.000000;10.000000
2025-10-08 16:15:44.085677 | orchestrator | + popd
2025-10-08 16:15:44.085993 | orchestrator |
2025-10-08 16:15:44.086013 | orchestrator | + echo
2025-10-08 16:15:44.086191 | orchestrator | # Create backup of MariaDB database
2025-10-08 16:15:44.086205 | orchestrator |
2025-10-08 16:15:44.086216 | orchestrator | + echo '# Create backup of MariaDB database'
2025-10-08 16:15:44.086227 | orchestrator | + echo
2025-10-08 16:15:44.086238 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-10-08 16:15:46.228378 | orchestrator | 2025-10-08 16:15:46 | INFO  | Task 99904c5c-c2b4-4e2a-82a6-2a2c993a9fa5 (mariadb_backup) was prepared for execution.
2025-10-08 16:15:46.228475 | orchestrator | 2025-10-08 16:15:46 | INFO  | It takes a moment until task 99904c5c-c2b4-4e2a-82a6-2a2c993a9fa5 (mariadb_backup) has been started and output is visible here.
2025-10-08 16:16:14.389325 | orchestrator |
2025-10-08 16:16:14.389420 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-08 16:16:14.389432 | orchestrator |
2025-10-08 16:16:14.389441 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-08 16:16:14.389449 | orchestrator | Wednesday 08 October 2025 16:15:50 +0000 (0:00:00.186) 0:00:00.186 *****
2025-10-08 16:16:14.389457 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:16:14.389465 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:16:14.389473 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:16:14.389480 | orchestrator |
2025-10-08 16:16:14.389488 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-08 16:16:14.389495 | orchestrator | Wednesday 08 October 2025 16:15:50 +0000 (0:00:00.339) 0:00:00.525 *****
2025-10-08 16:16:14.389503 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-10-08 16:16:14.389512 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-10-08 16:16:14.389520 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-10-08 16:16:14.389529 | orchestrator |
2025-10-08 16:16:14.389538 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-10-08 16:16:14.389547 | orchestrator |
2025-10-08 16:16:14.389556 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-10-08 16:16:14.389564 | orchestrator | Wednesday 08 October 2025 16:15:51 +0000 (0:00:00.618) 0:00:01.144 *****
2025-10-08 16:16:14.389574 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-08 16:16:14.389583 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-10-08 16:16:14.389592 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-10-08 16:16:14.389601 | orchestrator |
2025-10-08 16:16:14.389610 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-10-08 16:16:14.389619 | orchestrator | Wednesday 08 October 2025 16:15:51 +0000 (0:00:00.389) 0:00:01.533 *****
2025-10-08 16:16:14.389628 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-08 16:16:14.389638 | orchestrator |
2025-10-08 16:16:14.389646 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-10-08 16:16:14.389655 | orchestrator | Wednesday 08 October 2025 16:15:52 +0000 (0:00:00.543) 0:00:02.077 *****
2025-10-08 16:16:14.389664 | orchestrator | ok: [testbed-node-0]
2025-10-08 16:16:14.389673 | orchestrator | ok: [testbed-node-2]
2025-10-08 16:16:14.389682 | orchestrator | ok: [testbed-node-1]
2025-10-08 16:16:14.389690 | orchestrator |
2025-10-08 16:16:14.389700 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-10-08 16:16:14.389709 | orchestrator | Wednesday 08 October 2025 16:15:55 +0000 (0:00:03.239) 0:00:05.317 *****
2025-10-08 16:16:14.389718 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-10-08 16:16:14.389727 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-10-08 16:16:14.389755 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-10-08 16:16:14.389765 | orchestrator | mariadb_bootstrap_restart
2025-10-08 16:16:14.389774 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:16:14.389782 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:16:14.389791 | orchestrator | changed: [testbed-node-0]
2025-10-08 16:16:14.389843 | orchestrator |
2025-10-08 16:16:14.389852 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-10-08 16:16:14.389861 | orchestrator | skipping: no hosts matched
2025-10-08 16:16:14.389870 | orchestrator |
2025-10-08 16:16:14.389879 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-10-08 16:16:14.389888 | orchestrator | skipping: no hosts matched
2025-10-08 16:16:14.389896 | orchestrator |
2025-10-08 16:16:14.389905 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-10-08 16:16:14.389914 | orchestrator | skipping: no hosts matched
2025-10-08 16:16:14.389923 | orchestrator |
2025-10-08 16:16:14.389932 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-10-08 16:16:14.389941 | orchestrator |
2025-10-08 16:16:14.389949 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-10-08 16:16:14.389958 | orchestrator | Wednesday 08 October 2025 16:16:13 +0000 (0:00:17.722) 0:00:23.039 *****
2025-10-08 16:16:14.389967 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:16:14.389976 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:16:14.389984 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:16:14.389993 | orchestrator |
2025-10-08 16:16:14.390002 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-10-08 16:16:14.390063 | orchestrator | Wednesday 08 October 2025 16:16:13 +0000 (0:00:00.301) 0:00:23.341 *****
2025-10-08 16:16:14.390075 | orchestrator | skipping: [testbed-node-0]
2025-10-08 16:16:14.390084 | orchestrator | skipping: [testbed-node-1]
2025-10-08 16:16:14.390093 | orchestrator | skipping: [testbed-node-2]
2025-10-08 16:16:14.390102 | orchestrator |
2025-10-08 16:16:14.390110 | orchestrator | PLAY RECAP *********************************************************************
2025-10-08 16:16:14.390120 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-10-08 16:16:14.390130 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-08 16:16:14.390139 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-08 16:16:14.390190 | orchestrator |
2025-10-08 16:16:14.390200 | orchestrator |
2025-10-08 16:16:14.390209 | orchestrator | TASKS RECAP ********************************************************************
2025-10-08 16:16:14.390218 | orchestrator | Wednesday 08 October 2025 16:16:14 +0000 (0:00:00.421) 0:00:23.762 *****
2025-10-08 16:16:14.390227 | orchestrator | ===============================================================================
2025-10-08 16:16:14.390236 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.72s
2025-10-08 16:16:14.390261 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.24s
2025-10-08 16:16:14.390271 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-10-08 16:16:14.390280 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2025-10-08 16:16:14.390288 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s
2025-10-08 16:16:14.390297 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s
2025-10-08 16:16:14.390306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-10-08 16:16:14.390314 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s
2025-10-08 16:16:14.713772 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-10-08 16:16:14.721667 | orchestrator | + set -e
2025-10-08 16:16:14.721696 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-08 16:16:14.721708 | orchestrator | ++ export INTERACTIVE=false
2025-10-08 16:16:14.721719 | orchestrator | ++ INTERACTIVE=false
2025-10-08 16:16:14.721728 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-08 16:16:14.721738 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-08 16:16:14.721752 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-10-08 16:16:14.723587 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-10-08 16:16:14.730515 | orchestrator |
2025-10-08 16:16:14.730555 | orchestrator | # OpenStack endpoints
2025-10-08 16:16:14.730569 | orchestrator |
2025-10-08 16:16:14.730581 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-08 16:16:14.730592 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-08 16:16:14.730603 | orchestrator | + export OS_CLOUD=admin
2025-10-08 16:16:14.730614 | orchestrator | + OS_CLOUD=admin
2025-10-08 16:16:14.730625 | orchestrator | + echo
2025-10-08 16:16:14.730636 | orchestrator | + echo '# OpenStack endpoints'
2025-10-08 16:16:14.730646 | orchestrator | + echo
2025-10-08 16:16:14.730658 | orchestrator | + openstack endpoint list
2025-10-08 16:16:18.543071 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-10-08 16:16:18.543168 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-10-08 16:16:18.543183 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-10-08 16:16:18.543212 | orchestrator | | 0024198353dd48e5813421beda6e5097 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-10-08 16:16:18.543224 | orchestrator | | 09c6faf3eff74f9183bbda41ea136ce4 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-10-08 16:16:18.543235 | orchestrator | | 16eced3166aa40e497bd2dabb67c44d3 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-10-08 16:16:18.543246 | orchestrator | | 179647e1f0d0466e923877e07968d821 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-10-08 16:16:18.543257 | orchestrator | | 2413617426634e0ba70b8e27aa9e799a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-10-08 16:16:18.543268 | orchestrator | | 38c6deba8bc448f19d513958bce07654 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-10-08 16:16:18.543279 | orchestrator | | 4709f173d18e4470b2e8f1e1c88b5616 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-10-08 16:16:18.543289 | orchestrator | | 48455a5d98884730b8e334f9abb11828 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-10-08 16:16:18.543300 | orchestrator | | 619b1475664741db83ed5d33d981e646 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-10-08 16:16:18.543310 | orchestrator | | 6f030b3496ae44fa97b8d5b4fb5180c5 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-10-08 16:16:18.543321 | orchestrator | | 7497928f1ef54803b38ff8584f9ab545 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-10-08 16:16:18.543332 | orchestrator | | 77395e2be2294b4098b73c7b76b31c98 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-10-08 16:16:18.543365 | orchestrator | | 7f3ec6d9fc2f4dff9b05cda7a2aa6ece | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-10-08 16:16:18.543376 | orchestrator | | 80e3b13e3431417381e49fc772e0c91e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-10-08 16:16:18.543387 | orchestrator | | 88c9201bbd8f4d60892a22e1142783bd | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-10-08 16:16:18.543398 | orchestrator | | b72f3d1279f14acdb6736ef014ec1270 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-10-08 16:16:18.543408 | orchestrator | | d6cc2312feb74e568f248f059b9dd279 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-10-08 16:16:18.543419 | orchestrator | | dc514455408c4e87b9e2cb824a52492e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-10-08 16:16:18.543430 | orchestrator | | e541c477ba84416d9565a863680021ad | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-10-08 16:16:18.543440 | orchestrator | | e71d1bbe51854d74bfbfce0a4362a914 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-10-08 16:16:18.543470 | orchestrator | | ee35fa2312db4193bf94943a20a68148 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-10-08 16:16:18.543481 | orchestrator | | f474044ca1d54fef8ac758fc04d70bc7 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-10-08 16:16:18.543492 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-10-08 16:16:18.805201 | orchestrator |
2025-10-08 16:16:18.805239 | orchestrator | # Cinder
2025-10-08 16:16:18.805252 | orchestrator |
2025-10-08 16:16:18.805263 | orchestrator | + echo
2025-10-08 16:16:18.805274 | orchestrator | + echo '# Cinder'
2025-10-08 16:16:18.805286 | orchestrator | + echo
2025-10-08 16:16:18.805297 | orchestrator | + openstack volume service list
2025-10-08 16:16:21.506897 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:21.506987 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-10-08 16:16:21.507000 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:21.507011 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-08T16:16:15.000000 |
2025-10-08 16:16:21.507022 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-08T16:16:17.000000 |
2025-10-08 16:16:21.507033 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-08T16:16:18.000000 |
2025-10-08 16:16:21.507043 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-10-08T16:16:15.000000 |
2025-10-08 16:16:21.507054 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-10-08T16:16:15.000000 |
2025-10-08 16:16:21.507064 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-10-08T16:16:16.000000 |
2025-10-08 16:16:21.507075 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-10-08T16:16:13.000000 |
2025-10-08 16:16:21.507085 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-10-08T16:16:13.000000 |
2025-10-08 16:16:21.507096 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-10-08T16:16:13.000000 |
2025-10-08 16:16:21.507130 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:21.778071 | orchestrator |
2025-10-08 16:16:21.778155 | orchestrator | # Neutron
2025-10-08 16:16:21.778166 | orchestrator |
2025-10-08 16:16:21.778175 | orchestrator | + echo
2025-10-08 16:16:21.778183 | orchestrator | + echo '# Neutron'
2025-10-08 16:16:21.778192 | orchestrator | + echo
2025-10-08 16:16:21.778200 | orchestrator | + openstack network agent list
2025-10-08 16:16:24.696118 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-10-08 16:16:24.696220 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-10-08 16:16:24.696235 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-10-08 16:16:24.696247 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696258 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696268 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696279 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696290 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696300 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-10-08 16:16:24.696311 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-10-08 16:16:24.696321 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-10-08 16:16:24.696332 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-10-08 16:16:24.696343 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-10-08 16:16:25.023483 | orchestrator | + openstack network service provider list
2025-10-08 16:16:27.960255 | orchestrator | +---------------+------+---------+
2025-10-08 16:16:27.960357 | orchestrator | | Service Type | Name | Default |
2025-10-08 16:16:27.960371 | orchestrator | +---------------+------+---------+
2025-10-08 16:16:27.960383 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-10-08 16:16:27.960394 | orchestrator | +---------------+------+---------+
2025-10-08 16:16:28.372740 | orchestrator |
2025-10-08 16:16:28.372891 | orchestrator | # Nova
2025-10-08 16:16:28.372909 | orchestrator |
2025-10-08 16:16:28.372922 | orchestrator | + echo
2025-10-08 16:16:28.372933 | orchestrator | + echo '# Nova'
2025-10-08 16:16:28.372945 | orchestrator | + echo
2025-10-08 16:16:28.372957 | orchestrator | + openstack compute service list
2025-10-08 16:16:31.248245 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:31.248345 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-10-08 16:16:31.248380 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:31.248393 | orchestrator | | 78ddab3c-89e9-490e-91c2-29d7fa29e401 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-08T16:16:21.000000 |
2025-10-08 16:16:31.248427 | orchestrator | | 66f4f70f-fc4b-4839-90d9-52a6f95eaaf9 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-08T16:16:23.000000 |
2025-10-08 16:16:31.248439 | orchestrator | | 091305d8-b678-41b3-a08f-51ce1b3c7aa6 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-08T16:16:28.000000 |
2025-10-08 16:16:31.248450 | orchestrator | | 5f6b47a0-d607-424e-b3f2-716d7f7b80f1 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-10-08T16:16:29.000000 |
2025-10-08 16:16:31.248460 | orchestrator | | 4174a58f-bd5b-46c8-9935-9d419febc495 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-10-08T16:16:29.000000 |
2025-10-08 16:16:31.248471 | orchestrator | | 42bd5d0e-03d6-4fb8-8710-12bd96231745 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-10-08T16:16:24.000000 |
2025-10-08 16:16:31.248482 | orchestrator | | f3fc5db6-93e5-4e7f-88bb-2d99a58b82f5 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-10-08T16:16:29.000000 |
2025-10-08 16:16:31.248493 | orchestrator | | c2e45fe3-3736-467d-a148-ad43e34bbd96 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-10-08T16:16:29.000000 |
2025-10-08 16:16:31.248504 | orchestrator | | a05ebf7d-f3ed-4b87-b587-8a8002cc0d9d | nova-compute | testbed-node-4 | nova | enabled | up | 2025-10-08T16:16:29.000000 |
2025-10-08 16:16:31.248514 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-10-08 16:16:31.512815 | orchestrator | + openstack hypervisor list
2025-10-08 16:16:34.768012 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-08 16:16:34.768161 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-10-08 16:16:34.768189 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-08 16:16:34.768208 | orchestrator | | 72264ee6-b110-4339-b784-4097ad266e7f | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-10-08 16:16:34.768227 | orchestrator | | cf768e6e-4a7e-4bd2-9a17-d6033d927b0c | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-10-08 16:16:34.768246 | orchestrator | | bb28e351-d210-43c2-814e-88e0cf830092 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-10-08 16:16:34.768266 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-08 16:16:35.041855 | orchestrator |
2025-10-08 16:16:35.041944 | orchestrator | # Run OpenStack test play
2025-10-08 16:16:35.041960 | orchestrator |
2025-10-08 16:16:35.041972 | orchestrator | + echo
2025-10-08 16:16:35.041984 | orchestrator | + echo '# Run OpenStack test play'
2025-10-08 16:16:35.041997 | orchestrator | + echo
2025-10-08 16:16:35.042008 | orchestrator | + osism apply --environment openstack test
2025-10-08 16:16:37.030619 | orchestrator | 2025-10-08 16:16:37 | INFO  | Trying to run play test in environment openstack
2025-10-08 16:16:47.185627 | orchestrator | 2025-10-08 16:16:47 | INFO  | Task d0b3ec53-731a-4904-bb8c-a61b62e0c812 (test) was prepared for execution.
2025-10-08 16:16:47.185703 | orchestrator | 2025-10-08 16:16:47 | INFO  | It takes a moment until task d0b3ec53-731a-4904-bb8c-a61b62e0c812 (test) has been started and output is visible here.
2025-10-08 16:23:46.482279 | orchestrator |
2025-10-08 16:23:46.482379 | orchestrator | PLAY [Create test project] *****************************************************
2025-10-08 16:23:46.482395 | orchestrator |
2025-10-08 16:23:46.482408 | orchestrator | TASK [Create test domain] ******************************************************
2025-10-08 16:23:46.482458 | orchestrator | Wednesday 08 October 2025 16:16:51 +0000 (0:00:00.074) 0:00:00.074 *****
2025-10-08 16:23:46.482471 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482483 | orchestrator |
2025-10-08 16:23:46.482494 | orchestrator | TASK [Create test-admin user] **************************************************
2025-10-08 16:23:46.482506 | orchestrator | Wednesday 08 October 2025 16:16:55 +0000 (0:00:03.747) 0:00:03.822 *****
2025-10-08 16:23:46.482516 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482552 | orchestrator |
2025-10-08 16:23:46.482601 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-10-08 16:23:46.482614 | orchestrator | Wednesday 08 October 2025 16:16:59 +0000 (0:00:04.220) 0:00:08.042 *****
2025-10-08 16:23:46.482625 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482636 | orchestrator |
2025-10-08 16:23:46.482647 | orchestrator | TASK [Create test project] *****************************************************
2025-10-08 16:23:46.482657 | orchestrator | Wednesday 08 October 2025 16:17:05 +0000 (0:00:06.466) 0:00:14.508 *****
2025-10-08 16:23:46.482668 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482679 | orchestrator |
2025-10-08 16:23:46.482690 | orchestrator | TASK [Create test user] ********************************************************
2025-10-08 16:23:46.482701 | orchestrator | Wednesday 08 October 2025 16:17:09 +0000 (0:00:04.091) 0:00:18.600 *****
2025-10-08 16:23:46.482711 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482722 | orchestrator |
2025-10-08 16:23:46.482733 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-10-08 16:23:46.482744 | orchestrator | Wednesday 08 October 2025 16:17:14 +0000 (0:00:04.247) 0:00:22.848 *****
2025-10-08 16:23:46.482755 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-10-08 16:23:46.482767 | orchestrator | changed: [localhost] => (item=member)
2025-10-08 16:23:46.482779 | orchestrator | changed: [localhost] => (item=creator)
2025-10-08 16:23:46.482790 | orchestrator |
2025-10-08 16:23:46.482801 | orchestrator | TASK [Create test server group] ************************************************
2025-10-08 16:23:46.482813 | orchestrator | Wednesday 08 October 2025 16:17:26 +0000 (0:00:12.101) 0:00:34.949 *****
2025-10-08 16:23:46.482826 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482838 | orchestrator |
2025-10-08 16:23:46.482851 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-10-08 16:23:46.482863 | orchestrator | Wednesday 08 October 2025 16:17:30 +0000 (0:00:04.385) 0:00:39.335 *****
2025-10-08 16:23:46.482876 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482888 | orchestrator |
2025-10-08 16:23:46.482900 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-10-08 16:23:46.482913 | orchestrator | Wednesday 08 October 2025 16:17:35 +0000 (0:00:04.750) 0:00:44.086 *****
2025-10-08 16:23:46.482925 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482936 | orchestrator |
2025-10-08 16:23:46.482949 | orchestrator | TASK [Create icmp security group] **********************************************
2025-10-08 16:23:46.482960 | orchestrator | Wednesday 08 October 2025 16:17:39 +0000 (0:00:04.310) 0:00:48.397 *****
2025-10-08 16:23:46.482973 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.482985 | orchestrator |
2025-10-08 16:23:46.482997 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-10-08 16:23:46.483010 | orchestrator | Wednesday 08 October 2025 16:17:43 +0000 (0:00:04.069) 0:00:52.466 *****
2025-10-08 16:23:46.483022 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.483034 | orchestrator |
2025-10-08 16:23:46.483046 | orchestrator | TASK [Create test keypair] *****************************************************
2025-10-08 16:23:46.483058 | orchestrator | Wednesday 08 October 2025 16:17:47 +0000 (0:00:04.198) 0:00:56.665 *****
2025-10-08 16:23:46.483070 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.483082 | orchestrator |
2025-10-08 16:23:46.483095 | orchestrator | TASK [Create test network topology] ********************************************
2025-10-08 16:23:46.483107 | orchestrator | Wednesday 08 October 2025 16:17:52 +0000 (0:00:04.120) 0:01:00.785 *****
2025-10-08 16:23:46.483119 | orchestrator | changed: [localhost]
2025-10-08 16:23:46.483132 | orchestrator |
2025-10-08 16:23:46.483144 | orchestrator | TASK [Create test instances] ***************************************************
2025-10-08 16:23:46.483157 | orchestrator | Wednesday 08 October 2025 16:18:06 +0000 (0:00:14.789) 0:01:15.575 *****
2025-10-08 16:23:46.483169 | orchestrator | changed: [localhost] => (item=test)
2025-10-08 16:23:46.483182 | orchestrator | changed: [localhost] => (item=test-1)
2025-10-08 16:23:46.483193 | orchestrator |
2025-10-08 16:23:46.483204 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-08 16:23:46.483222 | orchestrator |
2025-10-08 16:23:46.483233
| orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-08 16:23:46.483244 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-08 16:23:46.483255 | orchestrator | 2025-10-08 16:23:46.483265 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-08 16:23:46.483276 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-08 16:23:46.483287 | orchestrator | 2025-10-08 16:23:46.483298 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-08 16:23:46.483308 | orchestrator | 2025-10-08 16:23:46.483319 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-08 16:23:46.483330 | orchestrator | changed: [localhost] => (item=test-4) 2025-10-08 16:23:46.483341 | orchestrator | 2025-10-08 16:23:46.483351 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-10-08 16:23:46.483377 | orchestrator | Wednesday 08 October 2025 16:22:23 +0000 (0:04:16.284) 0:05:31.860 ***** 2025-10-08 16:23:46.483389 | orchestrator | changed: [localhost] => (item=test) 2025-10-08 16:23:46.483404 | orchestrator | changed: [localhost] => (item=test-1) 2025-10-08 16:23:46.483432 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-08 16:23:46.483444 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-08 16:23:46.483455 | orchestrator | changed: [localhost] => (item=test-4) 2025-10-08 16:23:46.483466 | orchestrator | 2025-10-08 16:23:46.483477 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-10-08 16:23:46.483506 | orchestrator | Wednesday 08 October 2025 16:22:47 +0000 (0:00:23.881) 0:05:55.741 ***** 2025-10-08 16:23:46.483518 | orchestrator | changed: [localhost] => (item=test) 2025-10-08 16:23:46.483530 | orchestrator | changed: [localhost] => (item=test-1) 
2025-10-08 16:23:46.483541 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-08 16:23:46.483551 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-08 16:23:46.483562 | orchestrator | changed: [localhost] => (item=test-4) 2025-10-08 16:23:46.483572 | orchestrator | 2025-10-08 16:23:46.483583 | orchestrator | TASK [Create test volume] ****************************************************** 2025-10-08 16:23:46.483594 | orchestrator | Wednesday 08 October 2025 16:23:20 +0000 (0:00:33.923) 0:06:29.665 ***** 2025-10-08 16:23:46.483606 | orchestrator | changed: [localhost] 2025-10-08 16:23:46.483617 | orchestrator | 2025-10-08 16:23:46.483628 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-10-08 16:23:46.483640 | orchestrator | Wednesday 08 October 2025 16:23:27 +0000 (0:00:06.361) 0:06:36.026 ***** 2025-10-08 16:23:46.483651 | orchestrator | changed: [localhost] 2025-10-08 16:23:46.483662 | orchestrator | 2025-10-08 16:23:46.483673 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-10-08 16:23:46.483685 | orchestrator | Wednesday 08 October 2025 16:23:40 +0000 (0:00:13.427) 0:06:49.453 ***** 2025-10-08 16:23:46.483696 | orchestrator | ok: [localhost] 2025-10-08 16:23:46.483708 | orchestrator | 2025-10-08 16:23:46.483720 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-10-08 16:23:46.483731 | orchestrator | Wednesday 08 October 2025 16:23:46 +0000 (0:00:05.376) 0:06:54.829 ***** 2025-10-08 16:23:46.483742 | orchestrator | ok: [localhost] => { 2025-10-08 16:23:46.483754 | orchestrator |  "msg": "192.168.112.141" 2025-10-08 16:23:46.483765 | orchestrator | } 2025-10-08 16:23:46.483777 | orchestrator | 2025-10-08 16:23:46.483788 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-08 16:23:46.483800 | orchestrator | localhost : ok=20  
changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-08 16:23:46.483813 | orchestrator | 2025-10-08 16:23:46.483824 | orchestrator | 2025-10-08 16:23:46.483836 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-08 16:23:46.483852 | orchestrator | Wednesday 08 October 2025 16:23:46 +0000 (0:00:00.050) 0:06:54.880 ***** 2025-10-08 16:23:46.483870 | orchestrator | =============================================================================== 2025-10-08 16:23:46.483881 | orchestrator | Create test instances ------------------------------------------------- 256.28s 2025-10-08 16:23:46.483891 | orchestrator | Add tag to instances --------------------------------------------------- 33.92s 2025-10-08 16:23:46.483902 | orchestrator | Add metadata to instances ---------------------------------------------- 23.88s 2025-10-08 16:23:46.483913 | orchestrator | Create test network topology ------------------------------------------- 14.79s 2025-10-08 16:23:46.483923 | orchestrator | Attach test volume ----------------------------------------------------- 13.43s 2025-10-08 16:23:46.483934 | orchestrator | Add member roles to user test ------------------------------------------ 12.10s 2025-10-08 16:23:46.483944 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.47s 2025-10-08 16:23:46.483955 | orchestrator | Create test volume ------------------------------------------------------ 6.36s 2025-10-08 16:23:46.483965 | orchestrator | Create floating ip address ---------------------------------------------- 5.38s 2025-10-08 16:23:46.483976 | orchestrator | Create ssh security group ----------------------------------------------- 4.75s 2025-10-08 16:23:46.483987 | orchestrator | Create test server group ------------------------------------------------ 4.39s 2025-10-08 16:23:46.483997 | orchestrator | Add rule to ssh security group 
------------------------------------------ 4.31s 2025-10-08 16:23:46.484008 | orchestrator | Create test user -------------------------------------------------------- 4.25s 2025-10-08 16:23:46.484019 | orchestrator | Create test-admin user -------------------------------------------------- 4.22s 2025-10-08 16:23:46.484029 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.20s 2025-10-08 16:23:46.484040 | orchestrator | Create test keypair ----------------------------------------------------- 4.12s 2025-10-08 16:23:46.484050 | orchestrator | Create test project ----------------------------------------------------- 4.09s 2025-10-08 16:23:46.484061 | orchestrator | Create icmp security group ---------------------------------------------- 4.07s 2025-10-08 16:23:46.484072 | orchestrator | Create test domain ------------------------------------------------------ 3.75s 2025-10-08 16:23:46.484083 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-10-08 16:23:46.800996 | orchestrator | + server_list 2025-10-08 16:23:46.801083 | orchestrator | + openstack --os-cloud test server list 2025-10-08 16:23:50.877023 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-08 16:23:50.877120 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-10-08 16:23:50.877134 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-08 16:23:50.877146 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE | auto_allocated_network=10.42.0.30, 192.168.112.191 | N/A (booted from volume) | SCS-1L-1 | 2025-10-08 16:23:50.877158 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE | auto_allocated_network=10.42.0.7, 
192.168.112.108 | N/A (booted from volume) | SCS-1L-1 | 2025-10-08 16:23:50.877170 | orchestrator | | 3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.101 | N/A (booted from volume) | SCS-1L-1 | 2025-10-08 16:23:50.877180 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.117 | N/A (booted from volume) | SCS-1L-1 | 2025-10-08 16:23:50.877191 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test | ACTIVE | auto_allocated_network=10.42.0.27, 192.168.112.141 | N/A (booted from volume) | SCS-1L-1 | 2025-10-08 16:23:50.877202 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-08 16:23:51.176957 | orchestrator | + openstack --os-cloud test server show test 2025-10-08 16:23:54.551115 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:54.551223 | orchestrator | | Field | Value | 2025-10-08 16:23:54.551243 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:54.551256 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-08 16:23:54.551267 
| orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-08 16:23:54.551279 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-08 16:23:54.551290 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-10-08 16:23:54.551301 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-08 16:23:54.551313 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-08 16:23:54.551340 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-08 16:23:54.551371 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-08 16:23:54.551383 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-08 16:23:54.551399 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-08 16:23:54.551437 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-08 16:23:54.551449 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-08 16:23:54.551461 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-08 16:23:54.551472 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-08 16:23:54.551483 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-08 16:23:54.551494 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-08T16:18:50.000000 | 2025-10-08 16:23:54.551521 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-08 16:23:54.551533 | orchestrator | | accessIPv4 | | 2025-10-08 16:23:54.551549 | orchestrator | | accessIPv6 | | 2025-10-08 16:23:54.551561 | orchestrator | | addresses | auto_allocated_network=10.42.0.27, 192.168.112.141 | 2025-10-08 16:23:54.551572 | orchestrator | | config_drive | | 2025-10-08 16:23:54.551583 | orchestrator | | created | 2025-10-08T16:18:15Z | 2025-10-08 16:23:54.551595 | orchestrator | | description | None | 2025-10-08 16:23:54.551608 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', 
extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-08 16:23:54.551621 | orchestrator | | hostId | 0aff4c24ac7cc7d2191440378a7bea21d0e35c1caf7b5cb4d260d29f | 2025-10-08 16:23:54.551640 | orchestrator | | host_status | None | 2025-10-08 16:23:54.551661 | orchestrator | | id | 08276b55-01a6-4764-84a1-ada18d59ff0d | 2025-10-08 16:23:54.551675 | orchestrator | | image | N/A (booted from volume) | 2025-10-08 16:23:54.551688 | orchestrator | | key_name | test | 2025-10-08 16:23:54.551701 | orchestrator | | locked | False | 2025-10-08 16:23:54.551715 | orchestrator | | locked_reason | None | 2025-10-08 16:23:54.551734 | orchestrator | | name | test | 2025-10-08 16:23:54.551748 | orchestrator | | pinned_availability_zone | None | 2025-10-08 16:23:54.551761 | orchestrator | | progress | 0 | 2025-10-08 16:23:54.551782 | orchestrator | | project_id | 1bce148884884742a649d9b28191b413 | 2025-10-08 16:23:54.551795 | orchestrator | | properties | hostname='test' | 2025-10-08 16:23:54.551816 | orchestrator | | security_groups | name='ssh' | 2025-10-08 16:23:54.551828 | orchestrator | | | name='icmp' | 2025-10-08 16:23:54.551844 | orchestrator | | server_groups | None | 2025-10-08 16:23:54.551856 | orchestrator | | status | ACTIVE | 2025-10-08 16:23:54.551867 | orchestrator | | tags | test | 2025-10-08 16:23:54.551879 | orchestrator | | trusted_image_certificates | None | 2025-10-08 16:23:54.551891 | orchestrator | | updated | 2025-10-08T16:22:28Z | 2025-10-08 16:23:54.551903 | orchestrator | | user_id | 7b4b3271349148f2b14539982b32ae42 | 2025-10-08 16:23:54.551926 | orchestrator | | volumes_attached | delete_on_termination='True', id='b044bff5-776f-4993-b0f7-97226f1cb742' | 2025-10-08 16:23:54.551937 | orchestrator | | | delete_on_termination='False', id='330fd1e9-fbee-45b0-800d-f3a921918a6a' | 2025-10-08 16:23:54.554319 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:54.830492 | orchestrator | + openstack --os-cloud test server show test-1 2025-10-08 16:23:57.968099 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:57.968204 | orchestrator | | Field | Value | 2025-10-08 16:23:57.968219 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:57.968230 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-08 16:23:57.968242 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-08 16:23:57.968254 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-08 16:23:57.968282 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-10-08 16:23:57.968294 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-08 16:23:57.968305 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-08 
16:23:57.968333 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-08 16:23:57.968346 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-08 16:23:57.968361 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-08 16:23:57.968373 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-08 16:23:57.968384 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-08 16:23:57.968395 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-08 16:23:57.968444 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-08 16:23:57.968457 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-08 16:23:57.968469 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-08 16:23:57.968480 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-08T16:19:45.000000 | 2025-10-08 16:23:57.968498 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-08 16:23:57.968510 | orchestrator | | accessIPv4 | | 2025-10-08 16:23:57.968525 | orchestrator | | accessIPv6 | | 2025-10-08 16:23:57.968537 | orchestrator | | addresses | auto_allocated_network=10.42.0.36, 192.168.112.117 | 2025-10-08 16:23:57.968548 | orchestrator | | config_drive | | 2025-10-08 16:23:57.968567 | orchestrator | | created | 2025-10-08T16:19:10Z | 2025-10-08 16:23:57.968579 | orchestrator | | description | None | 2025-10-08 16:23:57.968590 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-08 16:23:57.968602 | orchestrator | | hostId | 3352e85059a7f54b1935ca175d6cea75d1d7bb08af1ac5780fc1beea | 2025-10-08 16:23:57.968613 | orchestrator | | host_status | None | 2025-10-08 16:23:57.968631 | orchestrator 
| | id | 8f716a21-acc2-4fea-a186-6c273630b28a | 2025-10-08 16:23:57.968644 | orchestrator | | image | N/A (booted from volume) | 2025-10-08 16:23:57.968657 | orchestrator | | key_name | test | 2025-10-08 16:23:57.968669 | orchestrator | | locked | False | 2025-10-08 16:23:57.968689 | orchestrator | | locked_reason | None | 2025-10-08 16:23:57.968702 | orchestrator | | name | test-1 | 2025-10-08 16:23:57.968715 | orchestrator | | pinned_availability_zone | None | 2025-10-08 16:23:57.968727 | orchestrator | | progress | 0 | 2025-10-08 16:23:57.968746 | orchestrator | | project_id | 1bce148884884742a649d9b28191b413 | 2025-10-08 16:23:57.968760 | orchestrator | | properties | hostname='test-1' | 2025-10-08 16:23:57.968779 | orchestrator | | security_groups | name='ssh' | 2025-10-08 16:23:57.968793 | orchestrator | | | name='icmp' | 2025-10-08 16:23:57.968810 | orchestrator | | server_groups | None | 2025-10-08 16:23:57.968823 | orchestrator | | status | ACTIVE | 2025-10-08 16:23:57.968850 | orchestrator | | tags | test | 2025-10-08 16:23:57.968864 | orchestrator | | trusted_image_certificates | None | 2025-10-08 16:23:57.968877 | orchestrator | | updated | 2025-10-08T16:22:32Z | 2025-10-08 16:23:57.968890 | orchestrator | | user_id | 7b4b3271349148f2b14539982b32ae42 | 2025-10-08 16:23:57.968903 | orchestrator | | volumes_attached | delete_on_termination='True', id='f97058df-fd4e-4666-9bae-999140c364c2' | 2025-10-08 16:23:57.976992 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:23:58.270013 | orchestrator | + openstack --os-cloud test server show test-2 2025-10-08 16:24:01.513759 
| orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:01.513866 | orchestrator | | Field | Value | 2025-10-08 16:24:01.513882 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:01.513918 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-08 16:24:01.513930 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-08 16:24:01.513942 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-08 16:24:01.513953 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-10-08 16:24:01.513965 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-08 16:24:01.513976 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-08 16:24:01.514004 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-08 16:24:01.514084 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-08 16:24:01.514105 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-08 16:24:01.514126 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-08 16:24:01.514138 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-08 16:24:01.514151 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-08 16:24:01.514163 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2025-10-08 16:24:01.514176 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-08 16:24:01.514188 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-08 16:24:01.514201 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-08T16:20:41.000000 | 2025-10-08 16:24:01.514223 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-08 16:24:01.514235 | orchestrator | | accessIPv4 | | 2025-10-08 16:24:01.514259 | orchestrator | | accessIPv6 | | 2025-10-08 16:24:01.514272 | orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.101 | 2025-10-08 16:24:01.514284 | orchestrator | | config_drive | | 2025-10-08 16:24:01.514296 | orchestrator | | created | 2025-10-08T16:20:05Z | 2025-10-08 16:24:01.514309 | orchestrator | | description | None | 2025-10-08 16:24:01.514321 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-08 16:24:01.514334 | orchestrator | | hostId | 08a29a25b1ee15ca3c4e0424b5255187887ee26abd9da3484ba8d08e | 2025-10-08 16:24:01.514347 | orchestrator | | host_status | None | 2025-10-08 16:24:01.514367 | orchestrator | | id | 3d7cdcca-58e4-4d03-834b-5d76740034ec | 2025-10-08 16:24:01.514379 | orchestrator | | image | N/A (booted from volume) | 2025-10-08 16:24:01.514402 | orchestrator | | key_name | test | 2025-10-08 16:24:01.514452 | orchestrator | | locked | False | 2025-10-08 16:24:01.514464 | orchestrator | | locked_reason | None | 2025-10-08 16:24:01.514476 | orchestrator | | name | test-2 | 2025-10-08 16:24:01.514487 | orchestrator | | pinned_availability_zone | None | 2025-10-08 16:24:01.514499 | orchestrator | | progress | 0 | 
2025-10-08 16:24:01.514510 | orchestrator | | project_id | 1bce148884884742a649d9b28191b413 | 2025-10-08 16:24:01.514521 | orchestrator | | properties | hostname='test-2' | 2025-10-08 16:24:01.514540 | orchestrator | | security_groups | name='ssh' | 2025-10-08 16:24:01.514559 | orchestrator | | | name='icmp' | 2025-10-08 16:24:01.514571 | orchestrator | | server_groups | None | 2025-10-08 16:24:01.514583 | orchestrator | | status | ACTIVE | 2025-10-08 16:24:01.514594 | orchestrator | | tags | test | 2025-10-08 16:24:01.514606 | orchestrator | | trusted_image_certificates | None | 2025-10-08 16:24:01.514617 | orchestrator | | updated | 2025-10-08T16:22:37Z | 2025-10-08 16:24:01.514628 | orchestrator | | user_id | 7b4b3271349148f2b14539982b32ae42 | 2025-10-08 16:24:01.514639 | orchestrator | | volumes_attached | delete_on_termination='True', id='b21a66f5-d4d8-474c-9332-1046f34d06ac' | 2025-10-08 16:24:01.517764 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:01.817102 | orchestrator | + openstack --os-cloud test server show test-3 2025-10-08 16:24:05.121989 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:05.122148 | orchestrator | | Field | Value | 2025-10-08 16:24:05.122182 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:05.122196 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-08 16:24:05.122208 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-08 16:24:05.122220 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-08 16:24:05.122231 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-10-08 16:24:05.122243 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-08 16:24:05.122254 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-08 16:24:05.122308 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-08 16:24:05.122322 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-08 16:24:05.122335 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-08 16:24:05.122352 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-08 16:24:05.122364 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-08 16:24:05.122376 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-08 16:24:05.122387 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-08 16:24:05.122398 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-08 16:24:05.122455 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-08 16:24:05.122476 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-08T16:21:26.000000 | 2025-10-08 16:24:05.122496 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-08 16:24:05.122508 | orchestrator | | accessIPv4 | | 2025-10-08 16:24:05.122520 | orchestrator | | accessIPv6 | | 2025-10-08 16:24:05.122536 | 
orchestrator | | addresses | auto_allocated_network=10.42.0.7, 192.168.112.108 | 2025-10-08 16:24:05.122548 | orchestrator | | config_drive | | 2025-10-08 16:24:05.122559 | orchestrator | | created | 2025-10-08T16:21:00Z | 2025-10-08 16:24:05.122570 | orchestrator | | description | None | 2025-10-08 16:24:05.122581 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-08 16:24:05.122592 | orchestrator | | hostId | 3352e85059a7f54b1935ca175d6cea75d1d7bb08af1ac5780fc1beea | 2025-10-08 16:24:05.122610 | orchestrator | | host_status | None | 2025-10-08 16:24:05.122628 | orchestrator | | id | cb6ce180-ace5-436a-988e-28ea996765a2 | 2025-10-08 16:24:05.122640 | orchestrator | | image | N/A (booted from volume) | 2025-10-08 16:24:05.122656 | orchestrator | | key_name | test | 2025-10-08 16:24:05.122668 | orchestrator | | locked | False | 2025-10-08 16:24:05.122679 | orchestrator | | locked_reason | None | 2025-10-08 16:24:05.122690 | orchestrator | | name | test-3 | 2025-10-08 16:24:05.122701 | orchestrator | | pinned_availability_zone | None | 2025-10-08 16:24:05.122712 | orchestrator | | progress | 0 | 2025-10-08 16:24:05.122729 | orchestrator | | project_id | 1bce148884884742a649d9b28191b413 | 2025-10-08 16:24:05.122740 | orchestrator | | properties | hostname='test-3' | 2025-10-08 16:24:05.122759 | orchestrator | | security_groups | name='ssh' | 2025-10-08 16:24:05.122771 | orchestrator | | | name='icmp' | 2025-10-08 16:24:05.122787 | orchestrator | | server_groups | None | 2025-10-08 16:24:05.122799 | orchestrator | | status | ACTIVE | 2025-10-08 16:24:05.122810 | orchestrator | | tags | test | 2025-10-08 
16:24:05.122821 | orchestrator | | trusted_image_certificates | None | 2025-10-08 16:24:05.122832 | orchestrator | | updated | 2025-10-08T16:22:42Z | 2025-10-08 16:24:05.122850 | orchestrator | | user_id | 7b4b3271349148f2b14539982b32ae42 | 2025-10-08 16:24:05.122861 | orchestrator | | volumes_attached | delete_on_termination='True', id='2997cbea-6348-4983-bf05-15b6f8becbc5' | 2025-10-08 16:24:05.126742 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:05.409508 | orchestrator | + openstack --os-cloud test server show test-4 2025-10-08 16:24:08.677278 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:08.677385 | orchestrator | | Field | Value | 2025-10-08 16:24:08.677472 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-08 16:24:08.677488 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-08 
16:24:08.677500 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-08 16:24:08.677512 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-08 16:24:08.677544 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-10-08 16:24:08.677556 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-08 16:24:08.677568 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-08 16:24:08.677598 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-08 16:24:08.677611 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-08 16:24:08.677622 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-08 16:24:08.677634 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-08 16:24:08.677645 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-08 16:24:08.677657 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-08 16:24:08.677677 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-08 16:24:08.677688 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-08 16:24:08.678177 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-08 16:24:08.678198 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-08T16:22:11.000000 |
2025-10-08 16:24:08.678226 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-08 16:24:08.678239 | orchestrator | | accessIPv4 | |
2025-10-08 16:24:08.678251 | orchestrator | | accessIPv6 | |
2025-10-08 16:24:08.678262 | orchestrator | | addresses | auto_allocated_network=10.42.0.30, 192.168.112.191 |
2025-10-08 16:24:08.678274 | orchestrator | | config_drive | |
2025-10-08 16:24:08.678285 | orchestrator | | created | 2025-10-08T16:21:45Z |
2025-10-08 16:24:08.678305 | orchestrator | | description | None |
2025-10-08 16:24:08.678317 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-08 16:24:08.678328 | orchestrator | | hostId | 08a29a25b1ee15ca3c4e0424b5255187887ee26abd9da3484ba8d08e |
2025-10-08 16:24:08.678339 | orchestrator | | host_status | None |
2025-10-08 16:24:08.678363 | orchestrator | | id | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd |
2025-10-08 16:24:08.678375 | orchestrator | | image | N/A (booted from volume) |
2025-10-08 16:24:08.678386 | orchestrator | | key_name | test |
2025-10-08 16:24:08.678397 | orchestrator | | locked | False |
2025-10-08 16:24:08.678434 | orchestrator | | locked_reason | None |
2025-10-08 16:24:08.678454 | orchestrator | | name | test-4 |
2025-10-08 16:24:08.678465 | orchestrator | | pinned_availability_zone | None |
2025-10-08 16:24:08.678476 | orchestrator | | progress | 0 |
2025-10-08 16:24:08.678488 | orchestrator | | project_id | 1bce148884884742a649d9b28191b413 |
2025-10-08 16:24:08.678499 | orchestrator | | properties | hostname='test-4' |
2025-10-08 16:24:08.678522 | orchestrator | | security_groups | name='ssh' |
2025-10-08 16:24:08.678534 | orchestrator | | | name='icmp' |
2025-10-08 16:24:08.678546 | orchestrator | | server_groups | None |
2025-10-08 16:24:08.678557 | orchestrator | | status | ACTIVE |
2025-10-08 16:24:08.678575 | orchestrator | | tags | test |
2025-10-08 16:24:08.678587 | orchestrator | | trusted_image_certificates | None |
2025-10-08 16:24:08.678599 | orchestrator | | updated | 2025-10-08T16:22:46Z |
2025-10-08 16:24:08.678610 | orchestrator | | user_id | 7b4b3271349148f2b14539982b32ae42 |
2025-10-08 16:24:08.678622 | orchestrator | | volumes_attached | delete_on_termination='True', id='93f3ac0b-3c26-42fc-8614-59780624ada9' |
2025-10-08 16:24:08.680034 | orchestrator |
+-------------------------------------+----------------------------------------------------+
2025-10-08 16:24:08.951833 | orchestrator | + server_ping
2025-10-08 16:24:08.953230 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-08 16:24:08.953254 | orchestrator | ++ tr -d '\r'
2025-10-08 16:24:11.918097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:24:11.918169 | orchestrator | + ping -c3 192.168.112.108
2025-10-08 16:24:11.931363 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-10-08 16:24:11.931390 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.39 ms
2025-10-08 16:24:12.928468 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.53 ms
2025-10-08 16:24:13.930733 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.01 ms
2025-10-08 16:24:13.930821 | orchestrator |
2025-10-08 16:24:13.930839 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-10-08 16:24:13.930847 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:24:13.930854 | orchestrator | rtt min/avg/max/mdev = 2.010/3.643/6.390/1.953 ms
2025-10-08 16:24:13.930869 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:24:13.930877 | orchestrator | + ping -c3 192.168.112.117
2025-10-08 16:24:13.944078 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-10-08 16:24:13.944173 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=8.55 ms
2025-10-08 16:24:14.939110 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.73 ms
2025-10-08 16:24:15.940902 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.74 ms
2025-10-08 16:24:15.940960 | orchestrator |
2025-10-08 16:24:15.940969 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-10-08 16:24:15.940977 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:24:15.940984 | orchestrator | rtt min/avg/max/mdev = 1.744/4.339/8.545/3.000 ms
2025-10-08 16:24:15.940991 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:24:15.940998 | orchestrator | + ping -c3 192.168.112.101
2025-10-08 16:24:15.954059 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-10-08 16:24:15.954099 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=9.65 ms
2025-10-08 16:24:16.949131 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.72 ms
2025-10-08 16:24:17.950652 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.95 ms
2025-10-08 16:24:17.950732 | orchestrator |
2025-10-08 16:24:17.950746 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-10-08 16:24:17.950757 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:24:17.950768 | orchestrator | rtt min/avg/max/mdev = 1.947/4.772/9.654/3.466 ms
2025-10-08 16:24:17.951260 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:24:17.951286 | orchestrator | + ping -c3 192.168.112.191
2025-10-08 16:24:17.964705 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-10-08 16:24:17.964761 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=8.65 ms
2025-10-08 16:24:18.960945 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.33 ms
2025-10-08 16:24:19.962087 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.10 ms
2025-10-08 16:24:19.962177 | orchestrator |
2025-10-08 16:24:19.962184 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-10-08 16:24:19.962190 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:24:19.962194 | orchestrator | rtt min/avg/max/mdev = 2.095/4.357/8.652/3.038 ms
2025-10-08 16:24:19.962525 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:24:19.962536 | orchestrator | + ping -c3 192.168.112.141
2025-10-08 16:24:19.977533 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-10-08 16:24:19.977547 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.71 ms
2025-10-08 16:24:20.972738 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.75 ms
2025-10-08 16:24:21.974224 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.13 ms
2025-10-08 16:24:21.974287 | orchestrator |
2025-10-08 16:24:21.974293 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-10-08 16:24:21.974299 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:24:21.974303 | orchestrator | rtt min/avg/max/mdev = 2.134/4.864/9.713/3.437 ms
2025-10-08 16:24:21.974895 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-08 16:24:21.975006 | orchestrator | + compute_list
2025-10-08 16:24:21.975014 | orchestrator | + osism manage compute list testbed-node-3
2025-10-08 16:24:25.757821 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:25.757929 | orchestrator | | ID | Name | Status |
2025-10-08 16:24:25.757944 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:24:25.757955 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE |
2025-10-08 16:24:25.757967 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE |
2025-10-08 16:24:25.757978 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:26.094330 | orchestrator | + osism manage compute list testbed-node-4
2025-10-08 16:24:29.521643 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:29.521736 | orchestrator | | ID | Name | Status |
2025-10-08 16:24:29.521777 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:24:29.521788 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE |
2025-10-08 16:24:29.521798 | orchestrator | |
3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE |
2025-10-08 16:24:29.521808 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:29.859772 | orchestrator | + osism manage compute list testbed-node-5
2025-10-08 16:24:33.352028 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:33.352121 | orchestrator | | ID | Name | Status |
2025-10-08 16:24:33.352137 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:24:33.352149 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test | ACTIVE |
2025-10-08 16:24:33.352161 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:24:33.685198 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-10-08 16:24:37.269877 | orchestrator | 2025-10-08 16:24:37 | INFO  | Live migrating server bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd
2025-10-08 16:24:50.465308 | orchestrator | 2025-10-08 16:24:50 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:24:52.843474 | orchestrator | 2025-10-08 16:24:52 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:24:55.204076 | orchestrator | 2025-10-08 16:24:55 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:24:57.473798 | orchestrator | 2025-10-08 16:24:57 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:24:59.901602 | orchestrator | 2025-10-08 16:24:59 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:25:02.195267 | orchestrator | 2025-10-08 16:25:02 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:25:04.484963 | orchestrator | 2025-10-08 16:25:04 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:25:06.768201 | orchestrator | 2025-10-08 16:25:06 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:25:09.103024 | orchestrator | 2025-10-08 16:25:09 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) completed with status ACTIVE
2025-10-08 16:25:09.103105 | orchestrator | 2025-10-08 16:25:09 | INFO  | Live migrating server 3d7cdcca-58e4-4d03-834b-5d76740034ec
2025-10-08 16:25:20.040953 | orchestrator | 2025-10-08 16:25:20 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:22.448024 | orchestrator | 2025-10-08 16:25:22 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:24.811870 | orchestrator | 2025-10-08 16:25:24 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:27.210359 | orchestrator | 2025-10-08 16:25:27 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:29.521336 | orchestrator | 2025-10-08 16:25:29 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:31.899909 | orchestrator | 2025-10-08 16:25:31 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:34.200652 | orchestrator | 2025-10-08 16:25:34 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:36.460308 | orchestrator | 2025-10-08 16:25:36 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:25:38.868456 | orchestrator | 2025-10-08 16:25:38 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) completed with status ACTIVE
2025-10-08 16:25:39.212575 | orchestrator | + compute_list
2025-10-08 16:25:39.212655 | orchestrator | + osism manage compute list testbed-node-3
2025-10-08 16:25:42.635133 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:25:42.635245 | orchestrator | | ID | Name | Status |
2025-10-08 16:25:42.635262 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:25:42.635274 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE |
2025-10-08 16:25:42.635285 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE |
2025-10-08 16:25:42.635296 | orchestrator | | 3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE |
2025-10-08 16:25:42.635307 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE |
2025-10-08 16:25:42.635318 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:25:42.964091 | orchestrator | + osism manage compute list testbed-node-4
2025-10-08 16:25:45.804641 | orchestrator | +------+--------+----------+
2025-10-08 16:25:45.804750 | orchestrator | | ID | Name | Status |
2025-10-08 16:25:45.804765 | orchestrator | |------+--------+----------|
2025-10-08 16:25:45.804777 | orchestrator | +------+--------+----------+
2025-10-08 16:25:46.190223 | orchestrator | + osism manage compute list testbed-node-5
2025-10-08 16:25:49.292245 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:25:49.292339 | orchestrator | | ID | Name | Status |
2025-10-08 16:25:49.292350 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:25:49.292358 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test | ACTIVE |
2025-10-08 16:25:49.292367 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:25:49.597852 | orchestrator | + server_ping
2025-10-08 16:25:49.598944 | orchestrator | ++
openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-08 16:25:49.598976 | orchestrator | ++ tr -d '\r'
2025-10-08 16:25:52.510781 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:25:52.511474 | orchestrator | + ping -c3 192.168.112.108
2025-10-08 16:25:52.520495 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-10-08 16:25:52.520521 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=7.46 ms
2025-10-08 16:25:53.517566 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.45 ms
2025-10-08 16:25:54.519519 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.21 ms
2025-10-08 16:25:54.520748 | orchestrator |
2025-10-08 16:25:54.520785 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-10-08 16:25:54.520798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:25:54.520810 | orchestrator | rtt min/avg/max/mdev = 2.209/4.038/7.455/2.417 ms
2025-10-08 16:25:54.520822 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:25:54.520834 | orchestrator | + ping -c3 192.168.112.117
2025-10-08 16:25:54.535869 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-10-08 16:25:54.535917 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=9.21 ms
2025-10-08 16:25:55.531065 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.95 ms
2025-10-08 16:25:56.531600 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.14 ms
2025-10-08 16:25:56.531698 | orchestrator |
2025-10-08 16:25:56.531714 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-10-08 16:25:56.531727 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:25:56.531739 | orchestrator | rtt min/avg/max/mdev = 2.139/4.767/9.208/3.157 ms
2025-10-08 16:25:56.532516 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:25:56.532541 | orchestrator | + ping -c3 192.168.112.101
2025-10-08 16:25:56.546975 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-10-08 16:25:56.547026 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=10.6 ms
2025-10-08 16:25:57.541644 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.97 ms
2025-10-08 16:25:58.542460 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=2.41 ms
2025-10-08 16:25:58.542570 | orchestrator |
2025-10-08 16:25:58.542587 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-10-08 16:25:58.542600 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:25:58.542611 | orchestrator | rtt min/avg/max/mdev = 2.408/5.315/10.565/3.719 ms
2025-10-08 16:25:58.543309 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:25:58.543335 | orchestrator | + ping -c3 192.168.112.191
2025-10-08 16:25:58.558147 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-10-08 16:25:58.558188 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=10.3 ms
2025-10-08 16:25:59.551888 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.32 ms
2025-10-08 16:26:00.553722 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.79 ms
2025-10-08 16:26:00.553820 | orchestrator |
2025-10-08 16:26:00.553836 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-10-08 16:26:00.553848 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:26:00.553859 | orchestrator | rtt min/avg/max/mdev = 1.790/4.804/10.307/3.896 ms
2025-10-08 16:26:00.553871 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:26:00.553883 | orchestrator | + ping -c3 192.168.112.141
2025-10-08 16:26:00.568574 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-10-08 16:26:00.568668 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=11.1 ms
2025-10-08 16:26:01.561881 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.71 ms
2025-10-08 16:26:02.563911 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.01 ms
2025-10-08 16:26:02.564007 | orchestrator |
2025-10-08 16:26:02.564024 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-10-08 16:26:02.564037 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:26:02.564048 | orchestrator | rtt min/avg/max/mdev = 2.012/5.275/11.107/4.133 ms
2025-10-08 16:26:02.564060 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-10-08 16:26:05.799041 | orchestrator | 2025-10-08 16:26:05 | INFO  | Live migrating server 08276b55-01a6-4764-84a1-ada18d59ff0d
2025-10-08 16:26:17.532261 | orchestrator | 2025-10-08 16:26:17 | INFO  | Live migration of
08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:19.892286 | orchestrator | 2025-10-08 16:26:19 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:22.236323 | orchestrator | 2025-10-08 16:26:22 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:24.576893 | orchestrator | 2025-10-08 16:26:24 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:26.857435 | orchestrator | 2025-10-08 16:26:26 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:29.270963 | orchestrator | 2025-10-08 16:26:29 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:31.631123 | orchestrator | 2025-10-08 16:26:31 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:33.917824 | orchestrator | 2025-10-08 16:26:33 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:36.166331 | orchestrator | 2025-10-08 16:26:36 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:38.418573 | orchestrator | 2025-10-08 16:26:38 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:26:40.743191 | orchestrator | 2025-10-08 16:26:40 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) completed with status ACTIVE
2025-10-08 16:26:41.083778 | orchestrator | + compute_list
2025-10-08 16:26:41.083840 | orchestrator | + osism manage compute list testbed-node-3
2025-10-08 16:26:44.367175 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:26:44.367275 | orchestrator | | ID | Name | Status |
2025-10-08 16:26:44.367288 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:26:44.367299 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE |
2025-10-08 16:26:44.367309 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE |
2025-10-08 16:26:44.367319 | orchestrator | | 3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE |
2025-10-08 16:26:44.367329 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE |
2025-10-08 16:26:44.367338 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test | ACTIVE |
2025-10-08 16:26:44.367348 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:26:44.706788 | orchestrator | + osism manage compute list testbed-node-4
2025-10-08 16:26:47.507819 | orchestrator | +------+--------+----------+
2025-10-08 16:26:47.507925 | orchestrator | | ID | Name | Status |
2025-10-08 16:26:47.507940 | orchestrator | |------+--------+----------|
2025-10-08 16:26:47.507952 | orchestrator | +------+--------+----------+
2025-10-08 16:26:47.960322 | orchestrator | + osism manage compute list testbed-node-5
2025-10-08 16:26:50.910482 | orchestrator | +------+--------+----------+
2025-10-08 16:26:50.910615 | orchestrator | | ID | Name | Status |
2025-10-08 16:26:50.910632 | orchestrator | |------+--------+----------|
2025-10-08 16:26:50.910644 | orchestrator | +------+--------+----------+
2025-10-08 16:26:51.283904 | orchestrator | + server_ping
2025-10-08 16:26:51.284956 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-08 16:26:51.285810 | orchestrator | ++ tr -d '\r'
2025-10-08 16:26:54.166541 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:26:54.166641 | orchestrator | + ping -c3 192.168.112.108
2025-10-08 16:26:54.175336 | orchestrator | PING
192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-10-08 16:26:54.175408 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=5.66 ms
2025-10-08 16:26:55.173954 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.43 ms
2025-10-08 16:26:56.174846 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.05 ms
2025-10-08 16:26:56.174973 | orchestrator |
2025-10-08 16:26:56.174993 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-10-08 16:26:56.175007 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:26:56.175626 | orchestrator | rtt min/avg/max/mdev = 2.048/3.376/5.657/1.619 ms
2025-10-08 16:26:56.175812 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:26:56.175900 | orchestrator | + ping -c3 192.168.112.117
2025-10-08 16:26:56.189847 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-10-08 16:26:56.189904 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=8.47 ms
2025-10-08 16:26:57.185905 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.24 ms
2025-10-08 16:26:58.187047 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.83 ms
2025-10-08 16:26:58.187136 | orchestrator |
2025-10-08 16:26:58.187150 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-10-08 16:26:58.187161 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:26:58.187172 | orchestrator | rtt min/avg/max/mdev = 1.825/4.180/8.471/3.039 ms
2025-10-08 16:26:58.187479 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:26:58.187499 | orchestrator | + ping -c3 192.168.112.101
2025-10-08 16:26:58.200741 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-10-08 16:26:58.200768 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=7.72 ms
2025-10-08 16:26:59.196805 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.43 ms
2025-10-08 16:27:00.198276 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=2.05 ms
2025-10-08 16:27:00.198421 | orchestrator |
2025-10-08 16:27:00.198439 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-10-08 16:27:00.198452 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:27:00.198463 | orchestrator | rtt min/avg/max/mdev = 2.048/4.068/7.724/2.589 ms
2025-10-08 16:27:00.198613 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:27:00.198632 | orchestrator | + ping -c3 192.168.112.191
2025-10-08 16:27:00.209581 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-10-08 16:27:00.209627 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=7.00 ms
2025-10-08 16:27:01.205915 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=1.73 ms
2025-10-08 16:27:02.208205 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.05 ms
2025-10-08 16:27:02.208307 | orchestrator |
2025-10-08 16:27:02.208323 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-10-08 16:27:02.208337 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:27:02.208348 | orchestrator | rtt min/avg/max/mdev = 1.729/3.593/7.001/2.413 ms
2025-10-08 16:27:02.208950 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:27:02.208974 | orchestrator | + ping -c3 192.168.112.141
2025-10-08 16:27:02.219970 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-10-08 16:27:02.220003 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=6.03 ms
2025-10-08 16:27:03.217476 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=1.94 ms
2025-10-08 16:27:04.218561 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.80 ms
2025-10-08 16:27:04.218655 | orchestrator |
2025-10-08 16:27:04.218671 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-10-08 16:27:04.218684 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-10-08 16:27:04.218695 | orchestrator | rtt min/avg/max/mdev = 1.798/3.258/6.034/1.963 ms
2025-10-08 16:27:04.219114 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-10-08 16:27:07.496685 | orchestrator | 2025-10-08 16:27:07 | INFO  | Live migrating server bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd
2025-10-08 16:27:18.172154 | orchestrator | 2025-10-08 16:27:18 | INFO  | Live migration of
bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:20.501405 | orchestrator | 2025-10-08 16:27:20 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:22.849851 | orchestrator | 2025-10-08 16:27:22 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:25.145142 | orchestrator | 2025-10-08 16:27:25 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:27.412793 | orchestrator | 2025-10-08 16:27:27 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:29.674194 | orchestrator | 2025-10-08 16:27:29 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:32.003858 | orchestrator | 2025-10-08 16:27:32 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:34.275602 | orchestrator | 2025-10-08 16:27:34 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:27:36.617313 | orchestrator | 2025-10-08 16:27:36 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) completed with status ACTIVE
2025-10-08 16:27:36.617454 | orchestrator | 2025-10-08 16:27:36 | INFO  | Live migrating server cb6ce180-ace5-436a-988e-28ea996765a2
2025-10-08 16:27:48.197205 | orchestrator | 2025-10-08 16:27:48 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:27:50.518699 | orchestrator | 2025-10-08 16:27:50 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:27:52.842002 | orchestrator | 2025-10-08 16:27:52 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:27:55.190662 | orchestrator | 2025-10-08 16:27:55 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:27:57.492263 | orchestrator | 2025-10-08 16:27:57 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:27:59.828504 | orchestrator | 2025-10-08 16:27:59 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:28:02.155794 | orchestrator | 2025-10-08 16:28:02 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:28:04.431675 | orchestrator | 2025-10-08 16:28:04 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:28:06.750188 | orchestrator | 2025-10-08 16:28:06 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:28:09.102898 | orchestrator | 2025-10-08 16:28:09 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) completed with status ACTIVE
2025-10-08 16:28:09.102996 | orchestrator | 2025-10-08 16:28:09 | INFO  | Live migrating server 3d7cdcca-58e4-4d03-834b-5d76740034ec
2025-10-08 16:28:22.171083 | orchestrator | 2025-10-08 16:28:22 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:24.533262 | orchestrator | 2025-10-08 16:28:24 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:26.871558 | orchestrator | 2025-10-08 16:28:26 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:29.192099 | orchestrator | 2025-10-08 16:28:29 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:31.477773 | orchestrator | 2025-10-08 16:28:31 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:33.767838 | orchestrator | 2025-10-08 16:28:33 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:36.119712 | orchestrator | 2025-10-08 16:28:36 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:38.391514 | orchestrator | 2025-10-08 16:28:38 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:28:40.728628 | orchestrator | 2025-10-08 16:28:40 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) completed with status ACTIVE
2025-10-08 16:28:40.728727 | orchestrator | 2025-10-08 16:28:40 | INFO  | Live migrating server 8f716a21-acc2-4fea-a186-6c273630b28a
2025-10-08 16:28:51.901150 | orchestrator | 2025-10-08 16:28:51 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:28:54.394080 | orchestrator | 2025-10-08 16:28:54 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:28:56.856289 | orchestrator | 2025-10-08 16:28:56 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:28:59.216814 | orchestrator | 2025-10-08 16:28:59 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:29:01.673226 | orchestrator | 2025-10-08 16:29:01 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:29:03.958296 | orchestrator | 2025-10-08 16:29:03 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:29:06.237977 | orchestrator | 2025-10-08 16:29:06 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:29:08.578958 | orchestrator | 2025-10-08 16:29:08 | INFO  | Live migration of
8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress 2025-10-08 16:29:11.053138 | orchestrator | 2025-10-08 16:29:11 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) completed with status ACTIVE 2025-10-08 16:29:11.053241 | orchestrator | 2025-10-08 16:29:11 | INFO  | Live migrating server 08276b55-01a6-4764-84a1-ada18d59ff0d 2025-10-08 16:29:22.441178 | orchestrator | 2025-10-08 16:29:22 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:24.775992 | orchestrator | 2025-10-08 16:29:24 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:27.110978 | orchestrator | 2025-10-08 16:29:27 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:29.403027 | orchestrator | 2025-10-08 16:29:29 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:31.846837 | orchestrator | 2025-10-08 16:29:31 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:34.109228 | orchestrator | 2025-10-08 16:29:34 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:36.406483 | orchestrator | 2025-10-08 16:29:36 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:38.726347 | orchestrator | 2025-10-08 16:29:38 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:41.019852 | orchestrator | 2025-10-08 16:29:41 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress 2025-10-08 16:29:43.372360 | orchestrator | 2025-10-08 16:29:43 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) completed with status ACTIVE 2025-10-08 16:29:43.760465 | orchestrator | + 
compute_list 2025-10-08 16:29:43.760537 | orchestrator | + osism manage compute list testbed-node-3 2025-10-08 16:29:46.654928 | orchestrator | +------+--------+----------+ 2025-10-08 16:29:46.655028 | orchestrator | | ID | Name | Status | 2025-10-08 16:29:46.655042 | orchestrator | |------+--------+----------| 2025-10-08 16:29:46.655053 | orchestrator | +------+--------+----------+ 2025-10-08 16:29:46.979745 | orchestrator | + osism manage compute list testbed-node-4 2025-10-08 16:29:50.290199 | orchestrator | +--------------------------------------+--------+----------+ 2025-10-08 16:29:50.290339 | orchestrator | | ID | Name | Status | 2025-10-08 16:29:50.290356 | orchestrator | |--------------------------------------+--------+----------| 2025-10-08 16:29:50.290369 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE | 2025-10-08 16:29:50.290380 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE | 2025-10-08 16:29:50.290391 | orchestrator | | 3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE | 2025-10-08 16:29:50.290402 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE | 2025-10-08 16:29:50.290413 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test | ACTIVE | 2025-10-08 16:29:50.290457 | orchestrator | +--------------------------------------+--------+----------+ 2025-10-08 16:29:50.622775 | orchestrator | + osism manage compute list testbed-node-5 2025-10-08 16:29:53.438465 | orchestrator | +------+--------+----------+ 2025-10-08 16:29:53.438570 | orchestrator | | ID | Name | Status | 2025-10-08 16:29:53.438583 | orchestrator | |------+--------+----------| 2025-10-08 16:29:53.438595 | orchestrator | +------+--------+----------+ 2025-10-08 16:29:53.778655 | orchestrator | + server_ping 2025-10-08 16:29:53.779969 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-10-08 16:29:53.780797 | orchestrator | ++ tr -d '\r' 
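The `server_ping` helper being expanded by the `set -x` trace here is not shown in full in the log. A minimal sketch of what it likely looks like, reconstructed from the traced commands; `list_floating_ips` is a hypothetical stand-in so the snippet runs standalone, whereas the real helper pipes the `openstack` command shown above directly into the loop:

```shell
# Sketch reconstructed from the trace, not the testbed's actual code.
# Stand-in for:
#   openstack --os-cloud test floating ip list --status ACTIVE \
#       -f value -c "Floating IP Address"
list_floating_ips() {
    printf '192.168.112.108\n192.168.112.117\n'
}

server_ping() {
    # tr -d '\r' strips carriage returns the CLI output may carry
    for address in $(list_floating_ips | tr -d '\r'); do
        ping -c3 "$address" || return 1
    done
}
```

Failing out on the first unreachable address is one plausible design; the traced script may equally rely on `set -e` to abort the task when a `ping -c3` exits non-zero.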
2025-10-08 16:29:57.002978 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:29:57.003086 | orchestrator | + ping -c3 192.168.112.108
2025-10-08 16:29:57.010904 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-10-08 16:29:57.010928 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.24 ms
2025-10-08 16:29:58.009108 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.31 ms
2025-10-08 16:29:59.010682 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.11 ms
2025-10-08 16:29:59.010778 | orchestrator |
2025-10-08 16:29:59.010794 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-10-08 16:29:59.010807 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:29:59.010818 | orchestrator | rtt min/avg/max/mdev = 2.114/3.555/6.240/1.900 ms
2025-10-08 16:29:59.011487 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:29:59.011509 | orchestrator | + ping -c3 192.168.112.117
2025-10-08 16:29:59.025487 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-10-08 16:29:59.025520 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=7.67 ms
2025-10-08 16:30:00.021420 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.64 ms
2025-10-08 16:30:01.021701 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.69 ms
2025-10-08 16:30:01.021810 | orchestrator |
2025-10-08 16:30:01.021827 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-10-08 16:30:01.021840 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-10-08 16:30:01.021852 | orchestrator | rtt min/avg/max/mdev = 1.687/3.999/7.672/2.626 ms
2025-10-08 16:30:01.022129 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:30:01.022152 | orchestrator | + ping -c3 192.168.112.101
2025-10-08 16:30:01.037203 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-10-08 16:30:01.037258 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=9.68 ms
2025-10-08 16:30:02.032088 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.35 ms
2025-10-08 16:30:03.033928 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.98 ms
2025-10-08 16:30:03.034087 | orchestrator |
2025-10-08 16:30:03.034106 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-10-08 16:30:03.034119 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:30:03.034131 | orchestrator | rtt min/avg/max/mdev = 1.977/4.667/9.677/3.545 ms
2025-10-08 16:30:03.034143 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:30:03.034155 | orchestrator | + ping -c3 192.168.112.191
2025-10-08 16:30:03.046223 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-10-08 16:30:03.046333 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=8.19 ms
2025-10-08 16:30:04.041762 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.09 ms
2025-10-08 16:30:05.043337 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.05 ms
2025-10-08 16:30:05.043442 | orchestrator |
2025-10-08 16:30:05.043458 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-10-08 16:30:05.043472 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:30:05.043484 | orchestrator | rtt min/avg/max/mdev = 2.053/4.110/8.189/2.884 ms
2025-10-08 16:30:05.044363 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:30:05.044417 | orchestrator | + ping -c3 192.168.112.141
2025-10-08 16:30:05.055423 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-10-08 16:30:05.055453 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=6.83 ms
2025-10-08 16:30:06.052282 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.39 ms
2025-10-08 16:30:07.054387 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.08 ms
2025-10-08 16:30:07.054487 | orchestrator |
2025-10-08 16:30:07.054505 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-10-08 16:30:07.054518 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:30:07.054530 | orchestrator | rtt min/avg/max/mdev = 2.082/3.765/6.829/2.169 ms
2025-10-08 16:30:07.054542 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-10-08 16:30:10.309823 | orchestrator | 2025-10-08 16:30:10 | INFO  | Live migrating server bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd
2025-10-08 16:30:20.598917 | orchestrator | 2025-10-08 16:30:20 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:22.934260 | orchestrator | 2025-10-08 16:30:22 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:25.243639 | orchestrator | 2025-10-08 16:30:25 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:27.572859 | orchestrator | 2025-10-08 16:30:27 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:29.858623 | orchestrator | 2025-10-08 16:30:29 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:32.150528 | orchestrator | 2025-10-08 16:30:32 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:34.406541 | orchestrator | 2025-10-08 16:30:34 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:36.751859 | orchestrator | 2025-10-08 16:30:36 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) is still in progress
2025-10-08 16:30:39.045038 | orchestrator | 2025-10-08 16:30:39 | INFO  | Live migration of bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd (test-4) completed with status ACTIVE
2025-10-08 16:30:39.045132 | orchestrator | 2025-10-08 16:30:39 | INFO  | Live migrating server cb6ce180-ace5-436a-988e-28ea996765a2
2025-10-08 16:30:49.685974 | orchestrator | 2025-10-08 16:30:49 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:30:51.993691 | orchestrator | 2025-10-08 16:30:51 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:30:54.363798 | orchestrator | 2025-10-08 16:30:54 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:30:56.717722 | orchestrator | 2025-10-08 16:30:56 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:30:58.974732 | orchestrator | 2025-10-08 16:30:58 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:31:01.320647 | orchestrator | 2025-10-08 16:31:01 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:31:03.582814 | orchestrator | 2025-10-08 16:31:03 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:31:05.954108 | orchestrator | 2025-10-08 16:31:05 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) is still in progress
2025-10-08 16:31:08.287221 | orchestrator | 2025-10-08 16:31:08 | INFO  | Live migration of cb6ce180-ace5-436a-988e-28ea996765a2 (test-3) completed with status ACTIVE
2025-10-08 16:31:08.287396 | orchestrator | 2025-10-08 16:31:08 | INFO  | Live migrating server 3d7cdcca-58e4-4d03-834b-5d76740034ec
2025-10-08 16:31:17.978694 | orchestrator | 2025-10-08 16:31:17 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:20.336225 | orchestrator | 2025-10-08 16:31:20 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:22.697805 | orchestrator | 2025-10-08 16:31:22 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:25.085609 | orchestrator | 2025-10-08 16:31:25 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:27.336060 | orchestrator | 2025-10-08 16:31:27 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:29.618596 | orchestrator | 2025-10-08 16:31:29 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:31.948043 | orchestrator | 2025-10-08 16:31:31 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:34.343240 | orchestrator | 2025-10-08 16:31:34 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) is still in progress
2025-10-08 16:31:36.642984 | orchestrator | 2025-10-08 16:31:36 | INFO  | Live migration of 3d7cdcca-58e4-4d03-834b-5d76740034ec (test-2) completed with status ACTIVE
2025-10-08 16:31:36.643087 | orchestrator | 2025-10-08 16:31:36 | INFO  | Live migrating server 8f716a21-acc2-4fea-a186-6c273630b28a
2025-10-08 16:31:46.409686 | orchestrator | 2025-10-08 16:31:46 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:31:48.762106 | orchestrator | 2025-10-08 16:31:48 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:31:51.091133 | orchestrator | 2025-10-08 16:31:51 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:31:53.426335 | orchestrator | 2025-10-08 16:31:53 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:31:55.703551 | orchestrator | 2025-10-08 16:31:55 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:31:58.071833 | orchestrator | 2025-10-08 16:31:58 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:32:00.454328 | orchestrator | 2025-10-08 16:32:00 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:32:02.728162 | orchestrator | 2025-10-08 16:32:02 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:32:05.004699 | orchestrator | 2025-10-08 16:32:05 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) is still in progress
2025-10-08 16:32:07.332122 | orchestrator | 2025-10-08 16:32:07 | INFO  | Live migration of 8f716a21-acc2-4fea-a186-6c273630b28a (test-1) completed with status ACTIVE
2025-10-08 16:32:07.332228 | orchestrator | 2025-10-08 16:32:07 | INFO  | Live migrating server 08276b55-01a6-4764-84a1-ada18d59ff0d
2025-10-08 16:32:17.560050 | orchestrator | 2025-10-08 16:32:17 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:19.894684 | orchestrator | 2025-10-08 16:32:19 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:22.264175 | orchestrator | 2025-10-08 16:32:22 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:24.631137 | orchestrator | 2025-10-08 16:32:24 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:26.954810 | orchestrator | 2025-10-08 16:32:26 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:29.294222 | orchestrator | 2025-10-08 16:32:29 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:31.559710 | orchestrator | 2025-10-08 16:32:31 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:33.784669 | orchestrator | 2025-10-08 16:32:33 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:36.073117 | orchestrator | 2025-10-08 16:32:36 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:38.378583 | orchestrator | 2025-10-08 16:32:38 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:40.645531 | orchestrator | 2025-10-08 16:32:40 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) is still in progress
2025-10-08 16:32:42.999746 | orchestrator | 2025-10-08 16:32:42 | INFO  | Live migration of 08276b55-01a6-4764-84a1-ada18d59ff0d (test) completed with status ACTIVE
2025-10-08 16:32:43.366876 | orchestrator | + compute_list
2025-10-08 16:32:43.366952 | orchestrator | + osism manage compute list testbed-node-3
2025-10-08 16:32:46.270651 | orchestrator | +------+--------+----------+
2025-10-08 16:32:46.270753 | orchestrator | | ID   | Name   | Status   |
2025-10-08 16:32:46.270767 | orchestrator | |------+--------+----------|
2025-10-08 16:32:46.270778 | orchestrator | +------+--------+----------+
2025-10-08 16:32:46.611862 | orchestrator | + osism manage compute list testbed-node-4
2025-10-08 16:32:49.524324 | orchestrator | +------+--------+----------+
2025-10-08 16:32:49.524410 | orchestrator | | ID   | Name   | Status   |
2025-10-08 16:32:49.524419 | orchestrator | |------+--------+----------|
2025-10-08 16:32:49.524426 | orchestrator | +------+--------+----------+
2025-10-08 16:32:49.852399 | orchestrator | + osism manage compute list testbed-node-5
2025-10-08 16:32:53.097123 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:32:53.097222 | orchestrator | | ID                                   | Name   | Status   |
2025-10-08 16:32:53.097284 | orchestrator | |--------------------------------------+--------+----------|
2025-10-08 16:32:53.097295 | orchestrator | | bbe73cf0-65ae-41a2-96b9-a4c86fafb1dd | test-4 | ACTIVE   |
2025-10-08 16:32:53.097302 | orchestrator | | cb6ce180-ace5-436a-988e-28ea996765a2 | test-3 | ACTIVE   |
2025-10-08 16:32:53.097311 | orchestrator | | 3d7cdcca-58e4-4d03-834b-5d76740034ec | test-2 | ACTIVE   |
2025-10-08 16:32:53.097319 | orchestrator | | 8f716a21-acc2-4fea-a186-6c273630b28a | test-1 | ACTIVE   |
2025-10-08 16:32:53.097327 | orchestrator | | 08276b55-01a6-4764-84a1-ada18d59ff0d | test   | ACTIVE   |
2025-10-08 16:32:53.097335 | orchestrator | +--------------------------------------+--------+----------+
2025-10-08 16:32:53.400705 | orchestrator | + server_ping
2025-10-08 16:32:53.402360 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-08 16:32:53.402391 | orchestrator | ++ tr -d '\r'
2025-10-08 16:32:56.290485 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:32:56.290590 | orchestrator | + ping -c3 192.168.112.108
2025-10-08 16:32:56.302256 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-10-08 16:32:56.302284 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=10.0 ms
2025-10-08 16:32:57.296448 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.62 ms
2025-10-08 16:32:58.297322 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.02 ms
2025-10-08 16:32:58.297450 | orchestrator |
2025-10-08 16:32:58.297467 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-10-08 16:32:58.297481 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-10-08 16:32:58.297508 | orchestrator | rtt min/avg/max/mdev = 2.015/4.892/10.042/3.649 ms
2025-10-08 16:32:58.297520 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:32:58.297532 | orchestrator | + ping -c3 192.168.112.117
2025-10-08 16:32:58.305590 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-10-08 16:32:58.305612 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.33 ms
2025-10-08 16:32:59.304897 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.54 ms
2025-10-08 16:33:00.306160 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.83 ms
2025-10-08 16:33:00.306271 | orchestrator |
2025-10-08 16:33:00.306287 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-10-08 16:33:00.306298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-08 16:33:00.306308 | orchestrator | rtt min/avg/max/mdev = 1.830/3.231/5.325/1.508 ms
2025-10-08 16:33:00.306563 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:33:00.306582 | orchestrator | + ping -c3 192.168.112.101
2025-10-08 16:33:00.321486 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-10-08 16:33:00.321512 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=9.06 ms
2025-10-08 16:33:01.316719 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.63 ms
2025-10-08 16:33:02.318483 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.91 ms
2025-10-08 16:33:02.318568 | orchestrator |
2025-10-08 16:33:02.318583 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-10-08 16:33:02.318596 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:33:02.318608 | orchestrator | rtt min/avg/max/mdev = 1.912/4.533/9.061/3.214 ms
2025-10-08 16:33:02.318852 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:33:02.318956 | orchestrator | + ping -c3 192.168.112.191
2025-10-08 16:33:02.330896 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-10-08 16:33:02.330929 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=7.26 ms
2025-10-08 16:33:03.327972 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.61 ms
2025-10-08 16:33:04.329555 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.97 ms
2025-10-08 16:33:04.329656 | orchestrator |
2025-10-08 16:33:04.329674 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-10-08 16:33:04.329687 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:33:04.329699 | orchestrator | rtt min/avg/max/mdev = 1.970/3.947/7.260/2.357 ms
2025-10-08 16:33:04.330190 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-08 16:33:04.330313 | orchestrator | + ping -c3 192.168.112.141
2025-10-08 16:33:04.345087 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-10-08 16:33:04.345109 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.53 ms
2025-10-08 16:33:05.339878 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.22 ms
2025-10-08 16:33:06.341661 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.52 ms
2025-10-08 16:33:06.341759 | orchestrator |
2025-10-08 16:33:06.341775 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-10-08 16:33:06.341788 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-08 16:33:06.341799 | orchestrator | rtt min/avg/max/mdev = 2.215/4.755/9.527/3.376 ms
2025-10-08 16:33:06.451186 | orchestrator | ok: Runtime: 0:20:23.960052
2025-10-08 16:33:06.495232 |
2025-10-08 16:33:06.495361 | TASK [Run tempest]
2025-10-08 16:33:07.030605 | orchestrator | skipping: Conditional result was False
2025-10-08 16:33:07.041787 |
2025-10-08 16:33:07.041965 | TASK [Check prometheus alert status]
2025-10-08 16:33:07.575462 | orchestrator | skipping: Conditional result was False
2025-10-08 16:33:07.577979 |
2025-10-08 16:33:07.578134 | PLAY RECAP
2025-10-08 16:33:07.578255 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-10-08 16:33:07.578307 |
2025-10-08 16:33:07.787464 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-10-08 16:33:07.788479 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-10-08 16:33:08.534955 |
2025-10-08 16:33:08.535113 | PLAY [Post output play]
2025-10-08 16:33:08.551417 |
2025-10-08 16:33:08.551559 | LOOP [stage-output : Register sources]
2025-10-08 16:33:08.606825 |
2025-10-08 16:33:08.607120 | TASK [stage-output : Check sudo]
2025-10-08 16:33:09.453539 | orchestrator | sudo: a password is required
2025-10-08 16:33:09.646810 | orchestrator | ok: Runtime: 0:00:00.015834
2025-10-08 16:33:09.661131 |
2025-10-08 16:33:09.661281 | LOOP [stage-output : Set source and destination for files and folders]
2025-10-08 16:33:09.696960 |
2025-10-08 16:33:09.697224 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-10-08 16:33:09.767578 | orchestrator | ok
2025-10-08 16:33:09.776802 |
2025-10-08 16:33:09.777002 | LOOP [stage-output : Ensure target folders exist]
2025-10-08 16:33:10.210516 | orchestrator | ok: "docs"
2025-10-08 16:33:10.220826 |
2025-10-08 16:33:10.464090 | orchestrator | ok: "artifacts"
2025-10-08 16:33:10.708886 | orchestrator | ok: "logs"
2025-10-08 16:33:10.724857 |
2025-10-08 16:33:10.725014 | LOOP [stage-output : Copy files and folders to staging folder]
2025-10-08 16:33:10.760951 |
2025-10-08 16:33:10.761233 | TASK [stage-output : Make all log files readable]
2025-10-08 16:33:11.024463 | orchestrator | ok
2025-10-08 16:33:11.030762 |
2025-10-08 16:33:11.030949 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-10-08 16:33:11.065164 | orchestrator | skipping: Conditional result was False
2025-10-08 16:33:11.075620 |
2025-10-08 16:33:11.075732 | TASK [stage-output : Discover log files for compression]
2025-10-08 16:33:11.099968 | orchestrator | skipping: Conditional result was False
2025-10-08 16:33:11.112460 |
2025-10-08 16:33:11.112588 | LOOP [stage-output : Archive everything from logs]
2025-10-08 16:33:11.158943 |
2025-10-08 16:33:11.159206 | PLAY [Post cleanup play]
2025-10-08 16:33:11.167789 |
2025-10-08 16:33:11.167893 | TASK [Set cloud fact (Zuul deployment)]
2025-10-08 16:33:11.234813 | orchestrator | ok
2025-10-08 16:33:11.246236 |
2025-10-08 16:33:11.246353 | TASK [Set cloud fact (local deployment)]
2025-10-08 16:33:11.271060 | orchestrator | skipping: Conditional result was False
2025-10-08 16:33:11.283341 |
2025-10-08 16:33:11.283467 | TASK [Clean the cloud environment]
2025-10-08 16:33:12.474881 | orchestrator | 2025-10-08 16:33:12 - clean up servers
2025-10-08 16:33:13.255363 | orchestrator | 2025-10-08 16:33:13 - testbed-manager
2025-10-08 16:33:13.341842 | orchestrator | 2025-10-08 16:33:13 - testbed-node-0
2025-10-08 16:33:13.429488 | orchestrator | 2025-10-08 16:33:13 - testbed-node-3
2025-10-08 16:33:13.510319 | orchestrator | 2025-10-08 16:33:13 - testbed-node-1
2025-10-08 16:33:13.604753 | orchestrator | 2025-10-08 16:33:13 - testbed-node-4
2025-10-08 16:33:13.697237 | orchestrator | 2025-10-08 16:33:13 - testbed-node-2
2025-10-08 16:33:13.790000 | orchestrator | 2025-10-08 16:33:13 - testbed-node-5
2025-10-08 16:33:13.877725 | orchestrator | 2025-10-08 16:33:13 - clean up keypairs
2025-10-08 16:33:13.894859 | orchestrator | 2025-10-08 16:33:13 - testbed
2025-10-08 16:33:13.914512 | orchestrator | 2025-10-08 16:33:13 - wait for servers to be gone
2025-10-08 16:33:24.700808 | orchestrator | 2025-10-08 16:33:24 - clean up ports
2025-10-08 16:33:24.891576 | orchestrator | 2025-10-08 16:33:24 - 76966401-2a8c-4a91-92fb-384931771a13
2025-10-08 16:33:25.158139 | orchestrator | 2025-10-08 16:33:25 - 77271b21-0285-4f16-8292-94cc8b0e2e18
2025-10-08 16:33:25.423919 | orchestrator | 2025-10-08 16:33:25 - 9b81b530-8a55-43de-a786-ac4088fc75ce
2025-10-08 16:33:25.668626 | orchestrator | 2025-10-08 16:33:25 - aa6c3f92-5c53-4d4e-acee-05b15a5c26cb
2025-10-08 16:33:25.879545 | orchestrator | 2025-10-08 16:33:25 - b15d084f-6e2b-40ad-9844-590a4a256e6f
2025-10-08 16:33:26.145560 | orchestrator | 2025-10-08 16:33:26 - f31a03d2-74ce-48af-a9ed-496b9001baa1
2025-10-08 16:33:26.558135 | orchestrator | 2025-10-08 16:33:26 - f7f4b3e5-eca5-4c4e-95fe-64c3e267a9fa
2025-10-08 16:33:26.779405 | orchestrator | 2025-10-08 16:33:26 - clean up volumes
2025-10-08 16:33:26.894519 | orchestrator | 2025-10-08 16:33:26 - testbed-volume-1-node-base
2025-10-08 16:33:26.940779 | orchestrator | 2025-10-08 16:33:26 - testbed-volume-3-node-base
2025-10-08 16:33:26.987177 | orchestrator | 2025-10-08 16:33:26 - testbed-volume-4-node-base
2025-10-08 16:33:27.031846 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-2-node-base
2025-10-08 16:33:27.072040 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-0-node-base
2025-10-08 16:33:27.112873 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-5-node-base
2025-10-08 16:33:27.154562 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-manager-base
2025-10-08 16:33:27.198260 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-7-node-4
2025-10-08 16:33:27.239177 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-0-node-3
2025-10-08 16:33:27.284480 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-2-node-5
2025-10-08 16:33:27.326786 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-6-node-3
2025-10-08 16:33:27.369364 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-8-node-5
2025-10-08 16:33:27.417573 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-3-node-3
2025-10-08 16:33:27.460283 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-5-node-5
2025-10-08 16:33:27.501451 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-4-node-4
2025-10-08 16:33:27.548074 | orchestrator | 2025-10-08 16:33:27 - testbed-volume-1-node-4
2025-10-08 16:33:27.590481 | orchestrator | 2025-10-08 16:33:27 - disconnect routers
2025-10-08 16:33:28.192289 | orchestrator | 2025-10-08 16:33:28 - testbed
2025-10-08 16:33:29.157771 | orchestrator | 2025-10-08 16:33:29 - clean up subnets
2025-10-08 16:33:29.226308 | orchestrator | 2025-10-08 16:33:29 - subnet-testbed-management
2025-10-08 16:33:29.382321 | orchestrator | 2025-10-08 16:33:29 - clean up networks
2025-10-08 16:33:29.589331 | orchestrator | 2025-10-08 16:33:29 - net-testbed-management
2025-10-08 16:33:29.884887 | orchestrator | 2025-10-08 16:33:29 - clean up security groups
2025-10-08 16:33:29.929769 | orchestrator | 2025-10-08 16:33:29 - testbed-node
2025-10-08 16:33:30.035736 | orchestrator | 2025-10-08 16:33:30 - testbed-management
2025-10-08 16:33:30.152122 | orchestrator | 2025-10-08 16:33:30 - clean up floating ips
2025-10-08 16:33:30.184518 | orchestrator | 2025-10-08 16:33:30 - 81.163.193.175
2025-10-08 16:33:30.585798 | orchestrator | 2025-10-08 16:33:30 - clean up routers
2025-10-08 16:33:30.713793 | orchestrator | 2025-10-08 16:33:30 - testbed
2025-10-08 16:33:31.841816 | orchestrator | ok: Runtime: 0:00:19.964443
2025-10-08 16:33:31.845262 |
2025-10-08 16:33:31.845392 | PLAY RECAP
2025-10-08 16:33:31.845480 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-10-08 16:33:31.845520 |
2025-10-08 16:33:31.967166 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-10-08 16:33:31.969465 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-10-08 16:33:32.678808 |
2025-10-08 16:33:32.679003 | PLAY [Cleanup play]
2025-10-08 16:33:32.694686 |
2025-10-08 16:33:32.694809 | TASK [Set cloud fact (Zuul deployment)]
2025-10-08 16:33:32.755397 | orchestrator | ok
2025-10-08 16:33:32.762045 |
2025-10-08
16:33:32.762167 | TASK [Set cloud fact (local deployment)] 2025-10-08 16:33:32.796025 | orchestrator | skipping: Conditional result was False 2025-10-08 16:33:32.806347 | 2025-10-08 16:33:32.806457 | TASK [Clean the cloud environment] 2025-10-08 16:33:33.995015 | orchestrator | 2025-10-08 16:33:33 - clean up servers 2025-10-08 16:33:34.450648 | orchestrator | 2025-10-08 16:33:34 - clean up keypairs 2025-10-08 16:33:34.465674 | orchestrator | 2025-10-08 16:33:34 - wait for servers to be gone 2025-10-08 16:33:34.508351 | orchestrator | 2025-10-08 16:33:34 - clean up ports 2025-10-08 16:33:34.592510 | orchestrator | 2025-10-08 16:33:34 - clean up volumes 2025-10-08 16:33:34.653304 | orchestrator | 2025-10-08 16:33:34 - disconnect routers 2025-10-08 16:33:34.685834 | orchestrator | 2025-10-08 16:33:34 - clean up subnets 2025-10-08 16:33:34.705233 | orchestrator | 2025-10-08 16:33:34 - clean up networks 2025-10-08 16:33:34.828042 | orchestrator | 2025-10-08 16:33:34 - clean up security groups 2025-10-08 16:33:34.862816 | orchestrator | 2025-10-08 16:33:34 - clean up floating ips 2025-10-08 16:33:34.891502 | orchestrator | 2025-10-08 16:33:34 - clean up routers 2025-10-08 16:33:35.342264 | orchestrator | ok: Runtime: 0:00:01.323698 2025-10-08 16:33:35.346085 | 2025-10-08 16:33:35.346255 | PLAY RECAP 2025-10-08 16:33:35.346379 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-10-08 16:33:35.346441 | 2025-10-08 16:33:35.464650 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-10-08 16:33:35.465637 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-08 16:33:36.180335 | 2025-10-08 16:33:36.180485 | PLAY [Base post-fetch] 2025-10-08 16:33:36.195448 | 2025-10-08 16:33:36.195565 | TASK [fetch-output : Set log path for multiple nodes] 2025-10-08 16:33:36.251073 | orchestrator | skipping: Conditional result was False 2025-10-08 
16:33:36.264621 | 2025-10-08 16:33:36.264816 | TASK [fetch-output : Set log path for single node] 2025-10-08 16:33:36.322030 | orchestrator | ok 2025-10-08 16:33:36.330320 | 2025-10-08 16:33:36.330448 | LOOP [fetch-output : Ensure local output dirs] 2025-10-08 16:33:36.797786 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/logs" 2025-10-08 16:33:37.069921 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/artifacts" 2025-10-08 16:33:37.334898 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/508b257440874fc3bc38a3dc0806d28d/work/docs" 2025-10-08 16:33:37.359633 | 2025-10-08 16:33:37.359778 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-10-08 16:33:38.267728 | orchestrator | changed: .d..t...... ./ 2025-10-08 16:33:38.268006 | orchestrator | changed: All items complete 2025-10-08 16:33:38.268050 | 2025-10-08 16:33:38.987260 | orchestrator | changed: .d..t...... ./ 2025-10-08 16:33:39.685908 | orchestrator | changed: .d..t...... 
./ 2025-10-08 16:33:39.707234 | 2025-10-08 16:33:39.707354 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-10-08 16:33:39.743288 | orchestrator | skipping: Conditional result was False 2025-10-08 16:33:39.747117 | orchestrator | skipping: Conditional result was False 2025-10-08 16:33:39.769562 | 2025-10-08 16:33:39.769670 | PLAY RECAP 2025-10-08 16:33:39.769745 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-10-08 16:33:39.769781 | 2025-10-08 16:33:39.894221 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-08 16:33:39.895260 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-08 16:33:40.623724 | 2025-10-08 16:33:40.623904 | PLAY [Base post] 2025-10-08 16:33:40.638412 | 2025-10-08 16:33:40.638546 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-10-08 16:33:41.551734 | orchestrator | changed 2025-10-08 16:33:41.563077 | 2025-10-08 16:33:41.563202 | PLAY RECAP 2025-10-08 16:33:41.563278 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-10-08 16:33:41.563357 | 2025-10-08 16:33:41.671897 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-08 16:33:41.672893 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-10-08 16:33:42.467736 | 2025-10-08 16:33:42.467900 | PLAY [Base post-logs] 2025-10-08 16:33:42.478244 | 2025-10-08 16:33:42.478375 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-10-08 16:33:42.923641 | localhost | changed 2025-10-08 16:33:42.940134 | 2025-10-08 16:33:42.940306 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-10-08 16:33:42.978175 | localhost | ok 2025-10-08 16:33:42.984871 | 2025-10-08 16:33:42.985054 | TASK [Set zuul-log-path fact] 2025-10-08 
16:33:43.001895 | localhost | ok 2025-10-08 16:33:43.014103 | 2025-10-08 16:33:43.014239 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-10-08 16:33:43.051363 | localhost | ok 2025-10-08 16:33:43.057683 | 2025-10-08 16:33:43.057853 | TASK [upload-logs : Create log directories] 2025-10-08 16:33:43.575156 | localhost | changed 2025-10-08 16:33:43.579551 | 2025-10-08 16:33:43.579686 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-10-08 16:33:44.091457 | localhost -> localhost | ok: Runtime: 0:00:00.007026 2025-10-08 16:33:44.101084 | 2025-10-08 16:33:44.101273 | TASK [upload-logs : Upload logs to log server] 2025-10-08 16:33:44.645130 | localhost | Output suppressed because no_log was given 2025-10-08 16:33:44.647389 | 2025-10-08 16:33:44.647505 | LOOP [upload-logs : Compress console log and json output] 2025-10-08 16:33:44.716931 | localhost | skipping: Conditional result was False 2025-10-08 16:33:44.724608 | localhost | skipping: Conditional result was False 2025-10-08 16:33:44.735277 | 2025-10-08 16:33:44.735387 | LOOP [upload-logs : Upload compressed console log and json output] 2025-10-08 16:33:44.792667 | localhost | skipping: Conditional result was False 2025-10-08 16:33:44.793216 | 2025-10-08 16:33:44.796686 | localhost | skipping: Conditional result was False 2025-10-08 16:33:44.803917 | 2025-10-08 16:33:44.804149 | LOOP [upload-logs : Upload console log and json output]